Science.gov

Sample records for 3-d imaging laser

  1. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new non-conventional optical 3D laser imaging. Non-conventional optical imaging exploits the advantages of laser illumination to form a three-dimensional image of a scene. Because of its ability to detect and recognize objects, 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision. In this paper, we present a 3D laser imaging technique for concealed object identification. The objective of this new technique is to provide the user with a complete 3D reconstruction of a concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from experimental results and from simulations based on the calculation of laser interactions with the different interfaces of the scene of interest. We show that the global 3D reconstruction procedure is capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images and analyse the different parameters of the identification process, such as resolution, the camouflage scenario, noise impact and degree of lacunarity.

  2. Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis

    NASA Astrophysics Data System (ADS)

    Mah, J.; Claire, S.; Steve, M.

    2009-05-01

    Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation-based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is insensitive to ambient lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements, and the depth resolution is 0.5 mm at 5 m. An above-ground field trial was conducted at a blocky road cut with well-defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario), where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of about 40 image elements per square centimeter. Polyworks, a high-density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is manual measurement with a compass; to be accepted as a substitute, the LCS should perform at least as well as manual measurement. To compare fracture orientation estimates derived from the 3D laser images with manual measurements, 160 inclinometer readings were taken at the above-ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. Underground, two main joint
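    The quoted point density can be checked with a quick calculation from the figures in the abstract (a 1024 x 1024 image covering a roughly 1.6 m x 1.6 m patch); the helper name is ours, not the authors':

```python
# Point density check for the LCS field trials: a 1024 x 1024 grid of
# volumetric image elements covering a ~1.6 m x 1.6 m patch of rock face.
def point_density_per_cm2(n_rows, n_cols, width_m, height_m):
    """Image elements per square centimetre of scanned surface."""
    area_cm2 = (width_m * 100.0) * (height_m * 100.0)
    return (n_rows * n_cols) / area_cm2

density = point_density_per_cm2(1024, 1024, 1.6, 1.6)
print(round(density, 1))  # ~41 elements per cm^2, consistent with the ~40 quoted
```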

  3. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system that uses a three-dimensional (3D) camera and structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system yields a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel-1 at a camera height of 1.4 m above the ground. The scanning rate of the camera can be set as high as 5000 lines s-1, so the density of scanned profiles varies with the vehicle's speed. The paper then illustrates the algorithms that use the 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and can obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
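    The dependence of profile density on vehicle speed described above sets the longitudinal spacing between scanned profiles. A minimal sketch, using the 5000 lines s-1 rate from the abstract and assumed example speeds:

```python
# Longitudinal profile spacing for the 3D pavement scanner: at a fixed line
# rate, the spacing between scanned profiles grows with vehicle speed.
# The 5000 lines/s rate is from the abstract; the speeds are assumed examples.
LINE_RATE_HZ = 5000.0

def profile_spacing_mm(speed_kmh, line_rate_hz=LINE_RATE_HZ):
    speed_ms = speed_kmh / 3.6               # km/h -> m/s
    return speed_ms / line_rate_hz * 1000.0  # metres -> mm between profiles

for v in (30, 60, 100):
    print(f"{v} km/h -> {profile_spacing_mm(v):.2f} mm between profiles")
```

At highway speed the profiles are spaced a few millimetres apart, which is why the profile density (but not the transverse resolution) varies with speed.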

  4. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry and remote sensing. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning: the laser point cloud data serve as the basis, a Digital Ortho-photo Map is used as an auxiliary source, and 3ds Max software is the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  5. Laser point cloud diluting and refined 3D reconstruction fusing with digital images

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zhang, Jianqing

    2007-06-01

    This paper presents a method that combines image-based modeling with laser scanning data to rebuild a realistic 3D model. First, an image pair is used to build a relative 3D model of the object, which is then registered to the laser coordinate system. The laser points are projected onto one of the images, and feature lines are extracted from that image. The projected 2D laser points are then fitted to lines in the image, and their corresponding 3D points are constrained to lines in 3D laser space to preserve the features of the model. A TIN is built and redundant points, i.e. those that do not affect the curvature of their neighborhood, are removed. The diluted laser point cloud is used to reconstruct the geometric model of the object, onto which the texture of the corresponding image is projected. Experimental results show the process to be feasible: the final model closely resembles the real object, and the method reduces the quantity of data while preserving the features of the model.
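    The dilution step above (dropping points that barely change the local shape while keeping those on feature lines) can be illustrated with a deliberately simplified 2D polyline sketch; the turning-angle criterion and threshold are our illustrative assumptions, not the authors' TIN-based procedure:

```python
# A minimal 2D sketch of curvature-preserving point dilution: drop interior
# points where the polyline is locally straight, keep those where it bends.
import math

def turning_angle(p0, p1, p2):
    """Angle (radians) by which the path p0->p1->p2 deviates from straight."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def dilute(points, angle_thresh=0.1):
    """Keep endpoints plus interior points where the polyline actually bends."""
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        if turning_angle(points[i - 1], points[i], points[i + 1]) > angle_thresh:
            kept.append(points[i])
    kept.append(points[-1])
    return kept

# A flat run with one corner: the redundant collinear points are removed.
line = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(dilute(line))  # [(0, 0), (3, 0), (3, 2)]
```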

  6. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    NASA Technical Reports Server (NTRS)

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

    A fusion technique that combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.

  7. Terahertz Lasers Reveal Information for 3D Images

    NASA Technical Reports Server (NTRS)

    2013-01-01

    After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

  8. High-resolution 3D imaging laser radar flight test experiments

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Davis, W. R.; Rich, G. C.; McLaughlin, J. L.; Lee, E. I.; Stanley, B. M.; Burnside, J. W.; Rowe, G. S.; Hatch, R. E.; Square, T. E.; Skelly, L. J.; O'Brien, M.; Vasile, A.; Heinrichs, R. M.

    2005-05-01

    Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photo-diode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, frequency-doubled solid-state Nd:YAG laser transmitting short pulses (300 ps FWHM) at a 16 kHz pulse rate and a 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode
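    The 500 MHz per-pixel clock quoted above fixes the range quantization of the time-of-flight measurement: each clock tick corresponds to c / (2 f_clk) of one-way range. A minimal sketch (the count value is an illustrative example, not a figure from the paper):

```python
# Range quantization implied by the 500 MHz per-pixel time-of-flight clock:
# each clock tick corresponds to c / (2 * f_clk) of one-way range.
C = 299_792_458.0      # speed of light, m/s
F_CLK = 500e6          # per-pixel clock from the abstract, Hz

range_bin_m = C / (2.0 * F_CLK)
print(f"range bin: {range_bin_m * 100:.1f} cm")   # ~30 cm per clock tick

def range_from_counts(counts):
    """Convert a stopped-clock count to one-way range in metres."""
    return counts * range_bin_m

print(f"{range_from_counts(1000):.1f} m")  # a count of 1000 -> ~299.8 m
```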

  9. Edge features extraction from 3D laser point cloud based on corresponding images

    NASA Astrophysics Data System (ADS)

    Li, Xin-feng; Zhao, Zi-ming; Xu, Guo-qing; Geng, Yan-long

    2013-09-01

    An extraction method for edge features from a 3D laser point cloud based on corresponding images is proposed. After registration of the point cloud and the corresponding image, sub-pixel edges are extracted from the image using a gray-moment algorithm. The sub-pixel edges are then projected onto the point cloud by fitting scan-lines, and finally the edge features are obtained by linking the crossing points. The experimental results demonstrate that the method guarantees accurate, fine extraction.
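    The registration that links image-space edges to laser points can be sketched with a standard pinhole camera model; the intrinsic parameters below are assumed for illustration and are not from the paper:

```python
# A minimal pinhole-projection sketch: mapping a camera-frame 3D laser point
# into pixel coordinates so that image-space sub-pixel edges can be matched
# to scan-line points. fx, fy, cx, cy are assumed illustrative intrinsics.
def project(point_xyz, fx=1000.0, fy=1000.0, cx=512.0, cy=384.0):
    """Project a camera-frame 3D point (x, y, z), z > 0, to pixel (u, v)."""
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * x / z + cx, fy * y / z + cy)

u, v = project((0.5, -0.2, 4.0))
print(f"u={u:.1f}, v={v:.1f}")  # u=637.0, v=334.0
```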

  10. Multiple-input multiple-output 3D imaging laser radar

    NASA Astrophysics Data System (ADS)

    Liu, Chunbo; Wu, Chao; Han, Xiang'e.

    2015-10-01

    A 3D (angle-angle-range) imaging laser radar (LADAR) based on a multiple-input multiple-output structure is proposed. In this LADAR, multiple coherent beams are randomly phased to form a structured light field, and an APD array detector is used to receive the echoes from the target. The sampled signals from each element of the APD array are correlated with the reference light to reconstruct local 3D images of the target, and the 3D panorama of the target is obtained by stitching the local images from all elements. The system composition is described first; the operating principle is then presented, and numerical simulations are provided to show the validity of the proposed scheme.

  11. 3-D reconstruction of neurons from multichannel confocal laser scanning image series.

    PubMed

    Wouterlood, Floris G

    2014-01-01

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. PMID:24723320
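    Numerically, a per-channel Z-series is simply a stack of 2D intensity images forming a (z, y, x) volume; a maximum-intensity projection is a common quick check before full 3-D reconstruction. A minimal sketch with synthetic data (real CLSM stacks would be loaded from the microscope's export files):

```python
# Stack a Z-series of 2D intensity images into one volume, then compute a
# maximum-intensity projection (MIP) along the optical (Z) axis.
import random

random.seed(0)
n_slices, height, width = 20, 8, 8

# Each Z-step produces one 2D intensity image; the series forms a volume.
volume = [[[random.random() for _ in range(width)] for _ in range(height)]
          for _ in range(n_slices)]

# MIP: for each (y, x), take the brightest value over all Z-slices.
mip = [[max(volume[z][y][x] for z in range(n_slices))
        for x in range(width)] for y in range(height)]

print(len(volume), len(mip), len(mip[0]))  # 20 8 8
```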

  12. 3D imaging LADAR with linear array devices: laser, detector and ROIC

    NASA Astrophysics Data System (ADS)

    Kameyama, Shumpei; Imaki, Masaharu; Tamagawa, Yasuhisa; Akino, Yosuke; Hirai, Akihito; Ishimura, Eitaro; Hirano, Yoshihito

    2009-07-01

    This paper introduces the recent development of 3D imaging LADAR (LAser Detection And Ranging) at Mitsubishi Electric Corporation. The system is built around in-house-developed linear-array key devices: the laser, the detector and the ROIC (Read-Out Integrated Circuit). The laser transmitter is a high-power, compact planar-waveguide array laser at a wavelength of 1.5 micron. The detector array consists of low-excess-noise Avalanche Photo Diodes (APDs) with an InAlAs multiplication layer. The analog ROIC array, fabricated in a SiGe BiCMOS process, includes Trans-Impedance Amplifiers (TIAs), peak intensity detectors, Time-Of-Flight (TOF) detectors and multiplexers for read-out. The ability of this device to detect small signals is enhanced by an optimized peak intensity detection circuit. By combining these devices with a one-dimensional fast scanner, real-time 3D range images can be obtained. After explaining the key devices, some 3D imaging results obtained with single-element versions of the key devices are demonstrated. Imaging using the developed array devices is planned for the near future.

  13. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  14. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target by pulse flight-time measurement. Because of the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of the sampling rate on range resolution, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser, and both the temporal and spatial correlations of the light are used to obtain the range image of the target. Theoretical analysis and numerical simulations demonstrate that HGI can obtain high-range-resolution images with a low sampling rate.

  15. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, where the 3D geometry is complex. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, providing a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images taken at a later epoch are registered to the point cloud, and the point clouds are projected onto each image by a weighted, window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  16. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-07-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.

  17. Recent development of 3D imaging laser sensor in Mitsubishi Electric Corporation

    NASA Astrophysics Data System (ADS)

    Imaki, M.; Kotake, N.; Tsuji, H.; Hirai, A.; Kameyama, S.

    2013-09-01

    We have been developing 3-D imaging laser sensors for several years because they acquire additional information about the scene, namely range data. Since range data enhance the potential to detect unwanted people and objects, these sensors can be used in applications such as safety control and security surveillance. In this paper, we focus on two types of our sensors: a high-frame-rate type and a compact type. To realize the high-frame-rate system, we developed two key devices, a linear-array receiver with 256 single InAlAs-APD detectors and a read-out IC (ROIC) array fabricated in a SiGe BiCMOS process, which are electrically connected to each other. Each ROIC measures not only the intensity but also the distance to the scene by high-speed analog signal processing. In addition, by mechanically scanning a mirror in the direction perpendicular to the linear image receiver, we have realized high-speed operation with a frame rate of over 30 Hz and 256 x 256 pixels. In the compact-type 3-D imaging laser sensor, we succeeded in downsizing the transmitter by scanning only the laser beam with a two-dimensional MEMS scanner. To obtain a wide field-of-view image, a suitable receiving optical system and a large-area receiver are needed in addition to the wide angle of the MEMS scanner. We developed a large-detection-area receiver consisting of 32 rectangular detectors whose output signals are summed; our original circuit evaluates each signal level, removes low-level signals and sums the remainder in order to improve the signal-to-noise ratio. In the following paper, we describe the system configurations and recent experimental results for the two types of 3-D imaging laser sensors.

  18. Development of scanning laser sensor for underwater 3D imaging with the coaxial optics

    NASA Astrophysics Data System (ADS)

    Ochimizu, Hideaki; Imaki, Masaharu; Kameyama, Shumpei; Saito, Takashi; Ishibashi, Shoujirou; Yoshida, Hiroshi

    2014-06-01

    We have developed a scanning laser sensor for underwater 3-D imaging with a wide scanning angle of 120° (horizontal) x 30° (vertical) in a compact package 25 cm in diameter and 60 cm long. The system uses a dome lens and coaxial optics to realize both the wide scanning angle and the compactness. It also features a sensitivity time control (STC) circuit, in which the receiving gain is increased according to the time of flight. The STC circuit helps detect small signals by suppressing unwanted signals backscattered by marine snow. We demonstrated the system's performance in a pool, confirming 3-D imaging at a distance of 20 m. Furthermore, the system was mounted on an autonomous underwater vehicle (AUV) and demonstrated seafloor mapping at a depth of 100 m in the ocean.
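    The STC idea described above can be sketched as a time-varying gain that compensates geometric spreading and water attenuation, so that late (far) echoes are amplified more than early backscatter from marine snow; the attenuation coefficient below is an assumed illustrative value, not a figure from the paper:

```python
# Sketch of sensitivity time control (STC): ramp the receiver gain with time
# of flight to compensate 1/R^2 spreading plus exponential attenuation in
# water, suppressing near-range backscatter relative to far echoes.
import math

C_WATER = 2.25e8     # approx. speed of light in water, m/s
ALPHA = 0.05         # assumed attenuation coefficient, 1/m (illustrative)

def stc_gain(t_seconds):
    """Relative gain compensating spreading and attenuation at time t."""
    r = C_WATER * t_seconds / 2.0            # one-way range from round trip
    return (r ** 2) * math.exp(2.0 * ALPHA * r)

for r_target in (5.0, 10.0, 20.0):
    t = 2.0 * r_target / C_WATER
    print(f"R={r_target:4.1f} m -> relative gain {stc_gain(t):9.1f}")
```

The gain rises steeply with time of flight, which is the qualitative behaviour the abstract describes.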

  19. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  20. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-01-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications. PMID:27435424

  1. Electromagnetic induction sounding and 3D laser imaging in support of a Mars methane analogue mission

    NASA Astrophysics Data System (ADS)

    Boivin, A.; Lai, P.; Samson, C.; Cloutis, E.; Holladay, S.; Monteiro Santos, F. A.

    2013-07-01

    The Mars Methane Analogue Mission simulates a micro-rover mission whose purpose is to detect, analyze, and determine the source of methane emissions on the planet's surface. As part of this project, both an electromagnetic induction sounder (EMIS) and a high-resolution triangulation-based 3D laser scanner were tested at the Jeffrey open-pit asbestos mine to identify and characterize geological environments favourable to the occurrence of methane. The presence of serpentinite in the form of chrysotile (asbestos), magnesium carbonate, and iron oxyhydroxides makes the mine a likely location for methane production. The EMIS clearly delineated the contacts between the two geological units found at the mine, peridotite and slate, which are separated by a shear zone. Both the peridotite and slate units have low and uniform apparent electrical conductivity and magnetic susceptibility, while the shear zone has much higher conductivity and susceptibility, with greater variability. The EMIS data were inverted and the resulting model captured lateral conductivity variations through the different bedrock geological units buried beneath a gravel road. The 3D point cloud data acquired by the laser scanner were fitted with triangular meshes, where steeply dipping triangles were plotted in dark grey to accentuate discontinuities. The resulting images were further processed using Sobel edge detection to highlight networks of fractures which are potential pathways for methane seepage.

  2. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time-stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record-high scan rate of 91 MHz, and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  3. High-resolution laser radar for 3D imaging in artwork cataloging, reproduction, and restoration

    NASA Astrophysics Data System (ADS)

    Ricci, Roberto; Fantoni, Roberta; Ferri de Collibus, Mario; Fornetti, Giorgio G.; Guarneri, Massimiliano; Poggi, Claudio

    2003-10-01

    A high-resolution Amplitude Modulated Laser Radar (AM-LR) sensor has recently been developed, aimed at accurately reconstructing 3D digital models of real targets, either single objects or complex scenes. The sensor's sounding beam can be swept linearly across the object or circularly around it by placing the object on a controlled rotating platform, making it possible to obtain linear and cylindrical range maps, respectively. Both the amplitude and the phase shift of the modulating wave of the back-scattered light are collected and processed, providing, respectively, a shade-free, high-resolution, photographic-like picture and accurate range data in the form of a range image. The resolution of the range measurements depends mainly on the laser modulation frequency, provided that the power of the backscattered light reaching the detector is at least a few nW (current best performance is ~100 μm). The complete object surface can be reconstructed from the sampled points using specifically developed software tools. The system has been successfully applied to scan different types of real surfaces (stone, wood, alloys, bones), with relevant applications in fields ranging from industrial machining to medical diagnostics to vision in hostile environments. Examples of reconstructed artwork models (pottery, marble statues) are presented, and the relevance of this technology to reverse engineering applied to cultural heritage conservation and restoration is discussed. Final 3D models can be passed to numerically controlled machines for rapid prototyping, exported in standard formats for CAD/CAM purposes and made available on the Internet under a virtual museum paradigm, thus enabling specialists to perform remote inspections of high-resolution digital reproductions of hardly accessible masterpieces.
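    In an amplitude-modulated laser radar of this kind, range follows from the phase shift of the modulation envelope: delta_phi = 4 pi f_mod R / c, so R = c delta_phi / (4 pi f_mod), with an unambiguous range of c / (2 f_mod). A sketch with an assumed 10 MHz modulation frequency (an illustrative value, not the sensor's actual setting):

```python
# Phase-to-range conversion for an AM laser radar: the round trip delays the
# modulation envelope by delta_phi = 4*pi*f_mod*R/c.
import math

C = 299_792_458.0
F_MOD = 10e6   # assumed modulation frequency, Hz (illustrative)

def range_from_phase(delta_phi_rad, f_mod=F_MOD):
    """One-way range (m) from the measured modulation phase shift."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod)

ambiguity_m = C / (2.0 * F_MOD)
print(f"unambiguous range: {ambiguity_m:.2f} m")               # ~14.99 m
print(f"phase pi/2 -> {range_from_phase(math.pi / 2):.2f} m")  # ~3.75 m
```

This is also why range resolution improves with modulation frequency, as the abstract notes, at the cost of a shorter unambiguous range.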

  4. Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Mohan; Blask, Steven; Higgins, Thomas; Clifton, William; Davidsohn, Daniel; Carson, Ryan; Reynolds, Van; Pfannenstiel, Joanne; Cannata, Richard; Marino, Richard; Drover, John; Hatch, Robert; Schue, David; Freehart, Robert; Rowe, Greg; Mooney, James; Hart, Carl; Stanley, Byron; McLaughlin, Joseph; Lee, Eui-In; Berenholtz, Jack; Aull, Brian; Zayhowski, John; Vasile, Alex; Ramaswami, Prem; Ingersoll, Kevin; Amoruso, Thomas; Khan, Imran; Davis, William; Heinrichs, Richard

    2007-04-01

Jigsaw three-dimensional (3D) imaging laser radar is a compact, lightweight system for imaging highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser-illuminate a cued target and to autonomously capture and produce 3D images of targets hidden under trees at high 3D voxel resolution. With our MIT Lincoln Laboratory team members, we integrated the sensor system into a geo-referenced 12-inch gimbal and used it in airborne data collections from a UH-1 manned helicopter, which served as a surrogate platform for data collection and system validation. In this paper, we discuss the results from the ground integration and testing of the system and from the UH-1 flight data collections. We also discuss the performance of the system as measured using ladar calibration targets.

  5. Fusion of image and laser-scanning data in a large-scale 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Shih, Jhih-Syuan; Lin, Ta-Te

    2013-05-01

Construction of large-scale 3D virtual environments is important in many fields, such as robotic navigation, urban planning, transportation, and remote sensing. Laser scanning is the most common approach to constructing 3D models. This paper proposes an automatic method for fusing image and laser-scanning data into a large-scale 3D virtual environment. The system comprises a laser-scanning device installed on a robot platform and software for data fusion and visualization. The algorithms for data fusion and scene integration are presented. Experiments were performed on the reconstruction of outdoor scenes to test and demonstrate the functionality of the system. We also discuss the efficacy of the system and the technical problems involved in the proposed method.

  6. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass, and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best-estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  7. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass, and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best-estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future
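The kind of sensitivity analysis described above can be sketched as a small Monte Carlo over body-segment volumes and densities, recording the resulting spread of total mass and centre of mass. Every segment value below is an illustrative placeholder, not data from the study.

```python
# Monte Carlo sensitivity sketch: vary each body segment's volume and
# density within plausible bounds and record the spread of total mass
# and whole-body centre-of-mass position. Segment values are
# hypothetical placeholders for illustration only.
import random

# (name, nominal volume m^3, nominal density kg/m^3, x of segment centroid m)
SEGMENTS = [
    ("torso", 4.0, 950.0, 0.0),
    ("tail",  1.2, 1000.0, -2.5),
    ("head",  0.4, 1000.0, 2.0),
    ("legs",  0.9, 1050.0, -0.2),
]

def sample_model(rng, vol_spread=0.2, rho_spread=0.1):
    """One random model: (total mass in kg, x of whole-body COM in m)."""
    masses, moments = [], []
    for _, v, rho, x in SEGMENTS:
        m = (v * rng.uniform(1 - vol_spread, 1 + vol_spread)
             * rho * rng.uniform(1 - rho_spread, 1 + rho_spread))
        masses.append(m)
        moments.append(m * x)
    total = sum(masses)
    return total, sum(moments) / total

rng = random.Random(0)
results = [sample_model(rng) for _ in range(2000)]
masses = [m for m, _ in results]
mass_range = (min(masses), max(masses))  # width of the plausible range
```

The width of `mass_range` is what the abstract reports as the "wide ranges in actual mass and inertial values"; the COM, a volume-weighted average of segment centroids, is far less sensitive to the sampled spreads.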

  8. Long-range laser scanning and 3D imaging for the Gneiss quarries survey

    NASA Astrophysics Data System (ADS)

    Schenker, Filippo Luca; Spataro, Alessio; Pozzoni, Maurizio; Ambrosi, Christian; Cannata, Massimiliano; Günther, Felix; Corboud, Federico

    2016-04-01

In Canton Ticino (southern Switzerland), the exploitation of natural stone, mostly gneiss, is an important activity in the valleys' economies. Nowadays, these economic activities are threatened by (i) exploitation costs related to geological phenomena such as fractures, faults, and heterogeneous rocks that hinder the processing of the stone product, (ii) continuously changing demand driven by evolving natural-stone fashions, and (iii) increasing administrative limits and rules intended to protect the environment. Therefore, the sustainable development of the sector over the next decades needs new and effective strategies for regulating and planning the quarries. A fundamental step in this process is the building of a 3D geological model of the quarries to constrain the volume of commercial natural stone and the volume of waste. In this context, we conducted terrestrial laser scanning surveys of the quarries in the Maggia Valley to obtain a detailed 3D topography onto which the geological units were mapped. The topographic 3D model was obtained with a long-range Riegl VZ-4000 laser scanner, which can measure from up to 4 km away at a speed of 147,000 points per second. It operates with the new V-Line technology, which defines the surface relief by sensing differentiated signals (echoes), even in the presence of obstacles such as vegetation. Depending on the aesthetics of the gneisses, we defined seven types of natural stone that, together with faults and joints, were mapped onto the 3D models of the exploitation sites. According to the orientation of the geological limits and structures, we projected the different rock units and fractures onto the excavation front. In this way, we obtained a 3D geological model from which we can quantitatively estimate the volumes of the seven different natural stones (with different commercial value) and of the waste (with low commercial value). To verify the 3D geological models and to quantify exploited rock and waste volumes the same

  9. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

We present a nondestructive 3D system for the analysis of whole Stardust tracks, using a combination of laser confocal scanning microscopy and synchrotron XRF; 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis of track No. 82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No. 128, No. 129, and No. 140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analyses.

  10. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

3D surface imaging refers to methods that generate a 3D surface representation of the objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics, and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
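The stereo-vision route to a 3D surface reduces, per matched pixel pair, to triangulating depth from disparity via Z = f·B/d. A minimal sketch follows; the focal length and baseline are illustrative assumptions, not values from the presentation.

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two cameras, and d the
# disparity (pixel shift of the same scene point between images).

def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    """Depth (m) of a scene point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity)")
    return focal_px * baseline_m / disparity_px

# Assumed 1200 px focal length and 10 cm baseline: 24 px disparity.
z = depth_from_disparity(24.0, 1200.0, 0.10)
```

Disparity shrinks as 1/Z, which is why stereo depth resolution degrades quadratically with working distance, one of the trade-offs the abstract's issue list (work distance) alludes to.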

  11. Parallel deconvolution of large 3D images obtained by confocal laser scanning microscopy.

    PubMed

    Pawliczek, Piotr; Romanowska-Pawliczek, Anna; Soltys, Zbigniew

    2010-03-01

Various deconvolution algorithms are often used for the restoration of digital images. Image deconvolution is especially needed for the correction of three-dimensional images obtained by confocal laser scanning microscopy. Such images suffer from distortions, particularly in the Z dimension. As a result, reliable automatic segmentation of these images may be difficult or even impossible. Effective deconvolution algorithms are memory-intensive and time-consuming. In this work, we propose a parallel version of the well-known Richardson-Lucy deconvolution algorithm developed for a system with distributed memory and implemented with the use of the Message Passing Interface (MPI). It enables significantly more rapid deconvolution of two-dimensional and three-dimensional images by efficiently splitting the computation across multiple computers. The implementation of this algorithm can be used on professional clusters provided by computing centers as well as on simple networks of ordinary PCs. PMID:19725070
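The Richardson-Lucy iteration that the paper parallelizes can be sketched serially in a few lines. The MPI splitting across processes is omitted here, and FFT-based (periodic-boundary) convolution is an assumption of this sketch, not necessarily the paper's boundary handling.

```python
# Serial core of Richardson-Lucy deconvolution: multiplicative updates
# estimate <- estimate * correlate(observed / blur(estimate), psf).
# Convolutions are done via the FFT, which implies periodic boundaries.
import numpy as np

def pad_psf(psf: np.ndarray, shape) -> np.ndarray:
    """Embed the PSF in an array of `shape` with its centre at index
    (0, 0), so FFT-based convolution applies no spatial shift."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                   axis=(0, 1))

def richardson_lucy(observed: np.ndarray, psf: np.ndarray,
                    n_iter: int = 25, eps: float = 1e-12) -> np.ndarray:
    """Restore a 2D image `observed` blurred by `psf`."""
    psf = psf / psf.sum()
    otf = np.fft.rfft2(pad_psf(psf, observed.shape))
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(n_iter):
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * otf,
                                s=observed.shape)
        ratio = observed / (blurred + eps)
        # correlation with the PSF == convolution with the flipped PSF
        estimate = estimate * np.fft.irfft2(
            np.fft.rfft2(ratio) * np.conj(otf), s=observed.shape)
    return estimate
```

The MPI version in the paper distributes image slabs across processes; only the two convolutions per iteration require halo exchange between neighbouring slabs, which is what makes the split efficient.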

  12. Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone.

    PubMed

    Cole, J M; Wood, J C; Lopes, N C; Poder, K; Abel, R L; Alatabi, S; Bryant, J S J; Jin, A; Kneip, S; Mecseki, K; Symes, D R; Mangles, S P D; Najmudin, Z

    2015-01-01

    A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications. PMID:26283308

  13. Laser-wakefield accelerators as hard x-ray sources for 3D medical imaging of human bone

    NASA Astrophysics Data System (ADS)

    Cole, J. M.; Wood, J. C.; Lopes, N. C.; Poder, K.; Abel, R. L.; Alatabi, S.; Bryant, J. S. J.; Jin, A.; Kneip, S.; Mecseki, K.; Symes, D. R.; Mangles, S. P. D.; Najmudin, Z.

    2015-08-01

    A bright μm-sized source of hard synchrotron x-rays (critical energy Ecrit > 30 keV) based on the betatron oscillations of laser wakefield accelerated electrons has been developed. The potential of this source for medical imaging was demonstrated by performing micro-computed tomography of a human femoral trabecular bone sample, allowing full 3D reconstruction to a resolution below 50 μm. The use of a 1 cm long wakefield accelerator means that the length of the beamline (excluding the laser) is dominated by the x-ray imaging distances rather than the electron acceleration distances. The source possesses high peak brightness, which allows each image to be recorded with a single exposure and reduces the time required for a full tomographic scan. These properties make this an interesting laboratory source for many tomographic imaging applications.

  14. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
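The sort-free transparency idea can be illustrated with a toy sketch: each point survives each rendering pass with a fixed probability, and averaging many passes yields the desired opacity without any depth sorting. This shows only the statistical principle, not the authors' renderer.

```python
# Toy sketch of stochastic transparency for point clouds: a point kept
# with probability p in each of many independent rendering passes
# contributes to the averaged image with effective opacity p, with no
# sort along the line of sight. Illustrative only.
import random

def stochastic_opacity(keep_prob: float, passes: int, rng=None) -> float:
    """Fraction of passes in which a single point survives culling
    (converges to keep_prob as the number of passes grows)."""
    rng = rng or random.Random(42)
    hits = sum(rng.random() < keep_prob for _ in range(passes))
    return hits / passes

# With many passes the measured opacity converges to keep_prob.
alpha = stochastic_opacity(0.3, 10_000)
```

The residual noise of the averaged image falls as 1/sqrt(passes), which is the trade-off between image quality and the interactive frame rates the abstract reports.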

  15. 3D imaging of biofilms on implants by detection of scattered light with a scanning laser optical tomograph

    PubMed Central

    Heidrich, Marko; Kühnel, Mark P.; Kellner, Manuela; Lorbeer, Raoul-Amadeus; Lange, Tineke; Winkel, Andreas; Stiesch, Meike; Meyer, Heiko; Heisterkamp, Alexander

    2011-01-01

Biofilms – communities of microorganisms attached to surfaces – are a constant threat to long-term success in modern implantology. The application of laser scanning microscopy (LSM) has increased knowledge of the microscopic properties of biofilms, whereas a 3D imaging technique for the large-scale visualization of bacterial growth and migration on curved and non-transparent surfaces has not yet been realized. Towards this goal, we built a scanning laser optical tomography (SLOT) setup that detects scattered laser light to image biofilm on dental implant surfaces. SLOT enables the visualization of living biofilms in 3D by detecting the wavelength-dependent absorption of non-fluorescent stains, such as reduced triphenyltetrazolium chloride (TTC), accumulated within metabolically active bacterial cells. Thus, the presented system allows the large-scale investigation of vital biofilm structure and in vitro development on cylindrical and non-transparent objects without the need for fluorescent vital staining. We suggest SLOT to be a valuable tool for the structural and volumetric investigation of biofilm formation on implants with sizes up to several millimeters. PMID:22076261

  16. 3D noninvasive, high-resolution imaging using a photoacoustic tomography (PAT) system and rapid wavelength-cycling lasers

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin; Gross, Daniel; Klosner, Marc; Chan, Gary; Wu, Chunbai; Heller, Donald F.

    2015-05-01

    Globally, cancer is a major health issue as advances in modern medicine continue to extend the human life span. Breast cancer ranks second as a cause of cancer death in women in the United States. Photoacoustic (PA) imaging (PAI) provides high molecular contrast at greater depths in tissue without the use of ionizing radiation. In this work, we describe the development of a PA tomography (PAT) system and a rapid wavelength-cycling Alexandrite laser designed for clinical PAI applications. The laser produces 450 mJ/pulse at 25 Hz to illuminate the entire breast, which eliminates the need to scan the laser source. Wavelength cycling provides a pulse sequence in which the output wavelength repeatedly alternates between 755 nm and 797 nm rapidly within milliseconds. We present imaging results of breast phantoms with inclusions of different sizes at varying depths, obtained with this laser source, a 5-MHz 128-element transducer and a 128-channel Verasonics system. Results include PA images and 3D reconstruction of the breast phantom at 755 and 797 nm, delineating the inclusions that mimic tumors in the breast.

  17. 3-D laser images of splash-form tektites and their use in aerodynamic numerical simulations of tektite formation

    NASA Astrophysics Data System (ADS)

    Samson, C.; Butler, S.; Fry, C.; McCausland, P. J. A.; Herd, R. K.; Sharomi, O.; Spiteri, R. J.; Ralchenko, M.

    2014-05-01

Ten splash-form tektites from the Australasian strewn field, with masses ranging from 21.20 to 175.00 g and exhibiting a variety of shapes (teardrop, ellipsoid, dumbbell, disk), have been imaged using a high-resolution laser digitizer. Despite challenges due to the samples' rounded shapes and pitted surfaces, the images were combined to create 3-D tektite models, which captured surface features with a high fidelity (≈30 voxels mm⁻²) and from which volume could be measured noninvasively. The laser-derived density for the tektites averaged 2.41 ± 0.11 g cm⁻³. Corresponding densities obtained via the Archimedean bead method averaged 2.36 ± 0.05 g cm⁻³. In addition to their curational value, the 3-D models can be used to calculate the tektites' moments of inertia and rotation periods while in flight, as a probe of their formation environment. Typical tektite rotation periods are estimated to be on the order of 1 s. Numerical simulations of air flow around the models at Reynolds numbers ranging from 1 to 10⁶ suggest that the relative velocity of the tektites with respect to the air must have been <10 m s⁻¹ during viscous deformation. This low relative velocity is consistent with tektite material being carried along by expanding gases in the early time following the impact.
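The density bookkeeping is straightforward once the laser model supplies a volume: density is simply mass over the voxel-counted volume. A sketch with illustrative numbers (not the paper's measurements) follows; the solid-ellipsoid inertia formula stands in for the full model-derived moment of inertia.

```python
# Density from a voxelized laser model, plus a solid-ellipsoid
# approximation for the moment of inertia used in rotation estimates.
# All sample numbers are illustrative, not the paper's data.

def density_g_cm3(mass_g: float, n_voxels: int, voxel_mm3: float) -> float:
    """Bulk density from a counted voxel model (1 cm^3 = 1000 mm^3)."""
    return mass_g / (n_voxels * voxel_mm3 / 1000.0)

def ellipsoid_inertia(mass_kg: float, a_m: float, b_m: float) -> float:
    """Moment of inertia of a solid ellipsoid about its c-axis:
    I = m * (a^2 + b^2) / 5."""
    return mass_kg * (a_m ** 2 + b_m ** 2) / 5.0

# A hypothetical 50 g tektite whose model counts 41,600 voxels of
# 0.5 mm^3 each (i.e. 20.8 cm^3 of volume):
rho = density_g_cm3(50.0, 41_600, 0.5)
I = ellipsoid_inertia(0.050, 0.02, 0.015)
```

With a measured moment of inertia, the in-flight rotation period follows from conservation of the angular momentum acquired during ejection, which is how the ~1 s periods quoted above are probed.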

  18. FELIX: a volumetric 3D laser display

    NASA Astrophysics Data System (ADS)

    Bahr, Detlef; Langhans, Knut; Gerken, Martin; Vogt, Carsten; Bezecny, Daniel; Homann, Dennis

    1996-03-01

In this paper, an innovative approach to true 3D image presentation in a space-filling, volumetric laser display is described. The introduced prototype system is based on a moving target screen that sweeps the display volume. The net result is the optical equivalent of a 3D array of image points illuminated to form a model of the object, occupying a physical space. Wireframe graphics are presented within the display volume, which a group of people can walk around and examine simultaneously from nearly any orientation and without any visual aids. In addition to the detailed vector-scanning mode, a raster-scanned system and a combination of both techniques are under development. The volumetric 3D laser display technology for the true reproduction of spatial images can tremendously improve the viewer's ability to interpret data and to reliably determine distance, shape, and orientation. Possible applications of this development range from air traffic control, where moving blips of light represent individual aircraft in a true-to-scale projected airspace of an airport, to various medical applications (e.g. electrocardiography, computed tomography), to entertainment and education visualization, as well as imaging in the field of engineering and Computer-Aided Design.

  19. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable objects by laser-engraving them with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  20. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano-focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat-affected zones, and dendrites in a laser-assisted 3D-printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance, time-resolved non-reversible experiments).
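The two per-frame scalars that drive the real-time maps can be sketched as follows: each detector frame is reduced to its mean intensity and the mean of its high-pass-filtered intensity (which suppresses the smooth background and keeps the sharp Laue peaks), and the scalars are rastered over the scan grid. The box-blur high-pass here is an illustrative stand-in for whichever filter the authors applied.

```python
# Reduce each Laue detector frame to (average recorded intensity,
# average filtered intensity) and raster the scalars into two
# microstructure maps over the ny-by-nx scan grid.
import numpy as np

def smooth(frame: np.ndarray, k: int = 5) -> np.ndarray:
    """Separable k-point moving-average blur along both axes."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, "same"), 1, frame)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, "same"), 0, out)

def reduce_frame(frame: np.ndarray):
    """(average recorded intensity, average high-pass intensity)."""
    return frame.mean(), np.abs(frame - smooth(frame)).mean()

def raster_maps(frames, ny: int, nx: int):
    """Per-position frames -> two ny-by-nx microstructure maps."""
    vals = np.array([reduce_frame(f) for f in frames])
    return vals[:, 0].reshape(ny, nx), vals[:, 1].reshape(ny, nx)
```

Because each frame collapses to two numbers the moment it is read out, the maps can be drawn while the scan is still running, which is the real-time property the abstract emphasizes.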

  1. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    PubMed Central

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano-focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat-affected zones, and dendrites in a laser-assisted 3D-printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance, time-resolved non-reversible experiments). PMID:27302087

  2. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys.

    PubMed

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano-focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat-affected zones, and dendrites in a laser-assisted 3D-printed Ni-based superalloy, at a speed much faster than data collection. Such an analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during the experiment is crucial (for instance, time-resolved non-reversible experiments). PMID:27302087

  3. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

To detect over- and underbreak in roadways and to address the difficulty of collecting roadway data, this paper presents a new method for continuous cross-section extraction and over-/underbreak detection based on 3D laser scanning technology and image processing. The method comprises the following three steps: Canny edge detection, local axis fitting, and continuous section extraction with over-/underbreak detection. First, after Canny edge detection, least-squares curve fitting is applied to fit the roadway axis locally. Then the attitude of the local roadway is adjusted so that its axis coincides with the extraction reference direction, and cross-sections are extracted along that direction. Finally, the actual cross-section is compared with the design cross-section to complete over-/underbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a clear advantage in computational cost and guarantees orthogonal cross-section intercepts.

  4. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify a device's capability of capturing a real scene. For this reason, several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented toward the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also covers laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper reviews the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy, and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between the barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.
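A typical certified-object check, fitting a sphere to the captured points and inspecting the residual deviations, can be sketched with a linear least-squares fit. This illustrates the comparison step described above; the specific metric definitions of VDI/VDE 2634 are not reproduced here.

```python
# Least-squares sphere fit: |p|^2 = 2 p.c + (r^2 - |c|^2) is linear in
# the unknowns (c, r^2 - |c|^2), so one lstsq solve recovers centre
# and radius; residual radial deviations then characterize the device.
import numpy as np

def fit_sphere(points: np.ndarray):
    """points: (N, 3) array. Returns (centre (3,), radius)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)
    return centre, radius

def radial_deviations(points, centre, radius):
    """Signed point-to-sphere distances (the raw deviation data)."""
    return np.linalg.norm(points - centre, axis=1) - radius

# Synthetic certified sphere: radius 25 mm, centred at (10, -5, 300) mm.
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 300.0]) + 25.0 * d
centre, radius = fit_sphere(pts)
```

On real captures the spread of `radial_deviations` feeds the probing-error statistics, while fitted centre positions of two rigidly connected spheres give the sphere-spacing distance mentioned in the abstract.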

  5. Laser gated viewing at ISL for vision through smoke, active polarimetry, and 3D imaging in NIR and SWIR wavelength bands

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank

    2013-12-01

    In this article, we review the application of laser gated viewing for improved vision through diffusing obstacles (smoke, turbid media, …), the capture of 3D scene information, and the study of material properties by polarimetric analysis at near-infrared (NIR) and shortwave-infrared (SWIR) wavelengths. Laser gated viewing has been studied since the 1960s as an active night-vision method. Owing to enormous improvements in the development of compact, highly efficient laser sources and of modern sensor technologies, the maturity of demonstrator systems has risen over the past decades. Further, laser gated viewing has been demonstrated to offer versatile sensing capabilities, with applications in long-range observation under certain degraded weather conditions, vision through obstacles and fog, active polarimetry, and 3D imaging.

  6. Characterizing targets and backgrounds for 3D laser radars

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove K.; Larsson, Hakan; Gustafsson, Frank; Chevalier, Tomas R.; Persson, Asa; Klasen, Lena M.

    2004-12-01

    Exciting development is taking place in 3D sensing laser radars. Scanning systems are well established for mapping from airborne and ground sensors. 3D sensing focal plane arrays (FPAs) enable a full range and intensity image to be captured in one laser shot. Gated viewing systems also produce 3D target information. Many applications for 3D laser radars are found in robotics, rapid terrain visualization, augmented vision, reconnaissance and target recognition, weapon guidance including aim-point selection, and others. Network-centric warfare will demand high-resolution geo-data for a common description of the environment. At FOI we have a measurement program to collect data relevant for 3D laser radars, using airborne and tripod-mounted equipment. Data collection spans from single-pixel waveform collection (1D), through 2D range-gated imaging, to full 3D imaging using scanning systems. This paper describes 3D laser data from different campaigns, with emphasis on the range distribution and reflection properties of targets and backgrounds under different seasonal conditions. Examples of the use of the data for system modeling, performance prediction and algorithm development are given. Different metrics to characterize the data sets are also discussed.

  7. Full 3D microwave quasi-holographic imaging

    NASA Astrophysics Data System (ADS)

    Castelli, Juan-Carlos; Tardivel, Francois

    A full 3D quasi-holographic image processing technique developed by ONERA is described. The complex backscattering coefficient of a drone scale model was measured for discrete values of the 3D backscattered wave vector over the 4.5-8 GHz frequency range. The 3D image processing is implemented on an HP 1000 mini-computer and will be part of the LASER 2 software to be used in three indoor RCS measurement facilities.

  8. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  9. Laser printing of 3D metallic interconnects

    NASA Astrophysics Data System (ADS)

    Beniam, Iyoel; Mathews, Scott A.; Charipar, Nicholas A.; Auyeung, Raymond C. Y.; Piqué, Alberto

    2016-04-01

    The use of laser-induced forward transfer (LIFT) techniques for the printing of functional materials has been demonstrated for numerous applications. The printing gives rise to patterns, which can be used to fabricate planar interconnects. More recently, various groups have demonstrated electrical interconnects from laser-printed 3D structures. The laser printing of these interconnects takes place through aggregation of voxels of either molten metal or of pastes containing dispersed metallic particles. However, the generated 3D structures do not possess the same metallic conductivity as a bulk metal interconnect of the same cross-section and length as those formed by wire bonding or tab welding. An alternative is to laser transfer entire 3D structures using a technique known as lase-and-place. Lase-and-place is a LIFT process whereby whole components and parts can be transferred from a donor substrate onto a desired location with one single laser pulse. This paper will describe the use of LIFT to laser print freestanding, solid metal foils or beams precisely over the contact pads of discrete devices to interconnect them into fully functional circuits. Furthermore, this paper will also show how the same laser can be used to bend or fold the bulk metal foils prior to transfer, thus forming compliant 3D structures able to provide strain relief for the circuits under flexing or during motion from thermal mismatch. These interconnect "bridges" can span wide gaps (on the order of a millimeter) and accommodate height differences of tens of microns between adjacent devices. Examples of these laser printed 3D metallic bridges and their role in the development of next generation electronics by additive manufacturing will be presented.

  10. Teat Morphology Characterization With 3D Imaging.

    PubMed

    Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

    2015-07-01

    The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of the detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples from 65 dairy cows over 12 months of age were obtained from cows slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat consisting of the papillary duct, Fürstenberg's rosette, and distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology. A number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may impact the teat canal and the external surface of the teat. PMID:25382725

  11. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, and by promoting 3D photography not only among scientists but also among amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, we present samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the optimally suited 3D methodology and to capture new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, owing to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  12. Laser 3D micro-manufacturing

    NASA Astrophysics Data System (ADS)

    Piqué, Alberto; Auyeung, Raymond C. Y.; Kim, Heungsoo; Charipar, Nicholas A.; Mathews, Scott A.

    2016-06-01

    Laser-based materials processing techniques are gaining widespread use in micro-manufacturing applications. The use of laser microfabrication techniques enables the processing of micro- and nanostructures from a wide range of materials and geometries without the need for masking and etching steps commonly associated with photolithography. This review aims to describe the broad applications space covered by laser-based micro- and nanoprocessing techniques and the benefits offered by the use of lasers in micro-manufacturing processes. Given their non-lithographic nature, these processes are also referred to as laser direct-write and constitute some of the earliest demonstrations of 3D printing or additive manufacturing at the microscale. As this review will show, the use of lasers enables precise control of the various types of processing steps—from subtractive to additive—over a wide range of scales with an extensive materials palette. Overall, laser-based direct-write techniques offer multiple modes of operation including the removal (via ablative processes) and addition (via photopolymerization or printing) of most classes of materials using the same equipment in many cases. The versatility provided by these multi-function, multi-material and multi-scale laser micro-manufacturing processes cannot be matched by photolithography or by other direct-write microfabrication techniques, and offers unique opportunities for current and future 3D micro-manufacturing applications.

  13. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  14. Performance assessment of simulated 3D laser images using Geiger-mode avalanche photo-diode: tests on simple synthetic scenarios

    NASA Astrophysics Data System (ADS)

    Coyac, Antoine; Hespel, Laurent; Riviere, Nicolas; Briottet, Xavier

    2015-10-01

    In the past few decades, laser imaging has demonstrated its potential in delivering accurate range images of objects or scenes, even at long range or under bad weather conditions (rain, fog, day and night vision). We note great improvements in the conception and development of single and multi-element infrared sensors, concerning embeddability, circuitry readout capacity, and pixel resolution and sensitivity, allowing a wide diversity of applications (e.g. enhanced vision, long-distance target detection and reconnaissance, 3D DSM generation). Unfortunately, it is often difficult to have access to all the instruments needed to compare their performance for a given application. Laser imaging simulation has been shown to be an interesting alternative to acquiring real data, offering greater flexibility for such sensor comparisons while being time- and cost-efficient. In this paper, we present a 3D laser imaging end-to-end simulator using a focal plane array with Geiger-mode detection, named LANGDOC. This work aims to highlight the interest and capability of this new generation of photodiode arrays, especially for airborne mapping and surveillance of high-risk areas.

  15. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  16. 3D Imaging of Porous Media Using Laser Scanning Confocal Microscopy with Application to Microscale Transport Processes

    SciTech Connect

    Fredrich, J.T.

    1999-02-10

    We present advances in the application of laser scanning confocal microscopy (LSCM) to image, reconstruct, and characterize statistically the microgeometry of porous geologic and engineering materials. We discuss technical and practical aspects of this imaging technique, including both its advantages and limitations. Confocal imaging can be used to optically section a material, with sub-micron resolution possible in the lateral and axial planes. The resultant volumetric image data, consisting of fluorescence intensities for typically ~50 million voxels in XYZ space, can be used to reconstruct the three-dimensional structure of the two-phase medium. We present several examples of this application, including studying pore geometry in sandstone, characterizing brittle failure processes in low-porosity rock deformed under triaxial loading conditions in the laboratory, and analyzing the microstructure of porous ceramic insulations. We then describe approaches to extract statistical microgeometric descriptions from volumetric image data, and present results derived from confocal volumetric data sets. Finally, we develop the use of confocal image data to automatically generate a three-dimensional mesh for numerical pore-scale flow simulations.
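    As a minimal illustration of extracting a statistical descriptor from such volumetric data, the sketch below thresholds a synthetic intensity volume into pore and solid voxels and reports the pore fraction; the threshold and data are assumptions for the demo, not values from the study:

```python
import numpy as np

def porosity(intensity, threshold):
    """Segment a confocal intensity volume into pore (bright, fluorescent
    epoxy-filled) and solid (dark) voxels, and return the pore fraction."""
    pore = intensity >= threshold
    return pore.mean()

# Synthetic 64^3 volume: uniform noise, so ~20% of voxels exceed 0.8.
rng = np.random.default_rng(1)
vol = rng.random((64, 64, 64))
phi = porosity(vol, threshold=0.8)
print(round(float(phi), 2))  # → 0.2
```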

  17. 3D digital image processing for biofilm quantification from confocal laser scanning microscopy: Multidimensional statistical analysis of biofilm modeling

    NASA Astrophysics Data System (ADS)

    Zielinski, Jerzy S.

    The dramatic increase in the number and volume of digital images produced in medical diagnostics, the escalating demand for rapid access to these relevant medical data, and the need for interpretation and retrieval have become of paramount importance to a modern healthcare system. There is therefore an ever-growing need for processed, interpreted and saved images of various types. Due to the high cost and unreliability of human-dependent image analysis, it is necessary to develop automated methods for feature extraction, using sophisticated mathematical algorithms and reasoning. This work is focused on digital image signal processing of biological and biomedical data in one-, two- and three-dimensional space. The methods and algorithms presented in this work were used to acquire data from genomic sequences, breast cancer images, and biofilm images. One-dimensional analysis was applied to DNA sequences, which were represented as a non-stationary sequence and modeled by a time-dependent autoregressive moving average (TD-ARMA) model. Two-dimensional analysis used a 2D-ARMA model and applied it to detect breast cancer from X-ray mammograms or ultrasound images. Three-dimensional detection and classification techniques were applied to biofilm images acquired using confocal laser scanning microscopy. Modern medical images are geometrically arranged arrays of data. The broadening scope of imaging as a way to organize our observations of the biophysical world has led to a dramatic increase in our ability to apply new processing techniques and to combine multiple channels of data into sophisticated and complex mathematical models of physiological function and dysfunction. With the explosion of the amount of data produced in the field of biomedicine, it is crucial to be able to construct accurate mathematical models of the data at hand. The two main purposes of signal modeling are data size conservation and parameter extraction.
Specifically, in biomedical imaging we have four key problems

  18. Evaluation of 3D imaging.

    PubMed

    Vannier, M W

    2000-10-01

    Interactive computer-based simulation is gaining acceptance for craniofacial surgical planning. Subjective visualization without objective measurement capability, however, severely limits the value of simulation, since spatial accuracy must be maintained. This study investigated the error sources involved in one method of surgical simulation evaluation. Linear and angular measurement errors were found to be within +/- 1 mm and 1 degree. Surface match of scanned objects was slightly less accurate, with errors up to 3 voxels and 4 degrees, and Boolean subtraction methods were 93 to 99% accurate. Once validated, these testing methods were applied to objectively compare craniofacial surgical simulations to post-operative outcomes, and verified that the form of simulation used in this study yields accurate depictions of surgical outcome. However, to fully evaluate surgical simulation, future work is still required to test the new methods in sufficient numbers of patients to achieve statistically significant results. Once completely validated, simulation can be used not only in pre-operative surgical planning, but also as a post-operative descriptor of surgical and traumatic physical changes. Validated image comparison methods can also show discrepancy of surgical outcome from the surgical plan, thus allowing evaluation of surgical technique. PMID:11098409

  19. Investigation and visualization of scleral channels created with femtosecond laser in enucleated human eyes using 3D optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Chaudhary, Gautam; Rao, Bin; Chai, Dongyul; Chen, Zhongping; Juhasz, Tibor

    2007-02-01

    We used optical coherence tomography (OCT) for non-invasive imaging of the anterior segment of the eye for investigating partial-thickness scleral channels created with a femtosecond laser. Glaucoma is associated with elevated intraocular pressure (IOP) due to reduced outflow facility in the eye. A partial-thickness aqueous humor (AH) drainage channel in the sclera was created with 1.7-μm wavelength femtosecond laser pulses to reduce IOP by increasing the outflow facility, as a solution to retard the progression of glaucoma. It is hypothesized that the precise dimensions and predetermined location of the channel would provide a controlled increase of the outflow rate resulting in IOP reduction. Therefore, it is significant to create the channel at the exact location with predefined dimensions. The aim of this research has two aspects. First, as the drainage channel is subsurface, it is a challenging task to determine its precise location, shape and dimensions, and it becomes very important to investigate the channel attributes after the laser treatment without disturbing the internal anterior structures. Second, to provide a non-invasive, image-based verification that extremely accurate and non-scarring AH drainage channel can be created with femtosecond laser. Partial-thickness scleral channels created in five human cadaver eyes were investigated non-invasively with a 1310-nm time-domain OCT imaging system. Three-dimensional (3D) OCT image stacks of the triangular cornea-sclera junction, also known as anterior chamber angle, were acquired for image-based analysis and visualization. The volumetric cutting-plane approach allowed reconstruction of images at any cross-sectional position in the entire 3D volume of tissue, making it a valuable tool for exploring and evaluating the location, shape and dimension of the channel from all directions. 
As a two-dimensional image-based methodology, an image-processing pipeline was implemented to enhance the channel features to

  20. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    Since many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  1. 3D Cell Culture Imaging with Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Dimiduk, Thomas; Nyberg, Kendra; Almeda, Dariela; Koshelva, Ekaterina; McGorty, Ryan; Kaz, David; Gardel, Emily; Auguste, Debra; Manoharan, Vinothan

    2011-03-01

    Cells in higher organisms naturally exist in a three dimensional (3D) structure, a fact sometimes ignored by in vitro biological research. Confinement to a two dimensional culture imposes significant deviations from the native 3D state. One of the biggest obstacles to wider use of 3D cultures is the difficulty of 3D imaging. The confocal microscope, the dominant 3D imaging instrument, is expensive, bulky, and light-intensive; live cells can be observed for only a short time before they suffer photodamage. We present an alternative 3D imaging technique, digital holographic microscopy, which can capture 3D information with axial resolution better than 2 μm in a 100 μm deep volume. Capturing a 3D image requires only a single camera exposure with a sub-millisecond laser pulse, allowing us to image cell cultures using five orders of magnitude less light energy than with confocal. This can be done with hardware costing ~1000. We use the instrument to image growth of MCF7 breast cancer cells and P. pastoris yeast. We acknowledge support from NSF GRFP.
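    The digital refocusing that underlies holographic 3D imaging is commonly done with the angular-spectrum method; the sketch below is a generic textbook implementation, not this instrument's actual pipeline (grid size, wavelength and pixel pitch are illustrative assumptions):

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Numerically refocus a recorded complex field by distance dz:
    FFT, multiply by the free-space transfer function, inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp evanescent waves
    H = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: propagating forward then backward recovers the field.
rng = np.random.default_rng(2)
f0 = rng.random((128, 128)) + 1j * rng.random((128, 128))
f1 = angular_spectrum_propagate(f0, dz=50e-6, wavelength=660e-9, dx=1e-6)
f2 = angular_spectrum_propagate(f1, dz=-50e-6, wavelength=660e-9, dx=1e-6)
print(np.allclose(f0, f2))  # → True
```

    Sweeping `dz` over a range of depths produces a numerical focal stack from the single exposure.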

  2. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the
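    Of the point-cloud comparison algorithms mentioned (C2C, ICP, M3C2), the simplest, cloud-to-cloud distance, can be sketched as follows; this brute-force version and its synthetic two-epoch data are illustrative assumptions, not the authors' processing chain:

```python
import numpy as np

def cloud_to_cloud(source, target):
    """Brute-force C2C: for each source point, the distance to its nearest
    neighbour in the target cloud. (For dense TLS clouds a k-d tree would
    replace this O(N*M) broadcast.)"""
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1)

# Synthetic epochs: the second scan is the first translated by 0.05 m in x.
rng = np.random.default_rng(3)
epoch1 = rng.random((200, 3))
epoch2 = epoch1 + np.array([0.05, 0.0, 0.0])
d = cloud_to_cloud(epoch2, epoch1)
print(round(float(np.median(d)), 2))  # → 0.05
```

    Note that C2C yields unsigned distances only; signed, surface-normal-oriented change measures are what M3C2-style methods add.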

  3. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3D miniature microscopic imaging system with a size of 35 x 35 x 105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot. From the light-field raw data, the focal plane can be changed digitally and the 3D image reconstructed after the image has been taken. To localize an object in a 3D volume, an automated data-analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3D rendering. Furthermore, we have developed a series of design rules to enhance pixel-use efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two fluorescent particles of different colors separated by a cover glass over a 600 um range, and show its focal stacks and 3D positions.

  4. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method by means of light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. The multidirectional depth estimation can effectively achieve high-dynamic-range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method to perform high-quality 3D imaging for highly and lowly reflective surfaces. PMID:27607639
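    The per-ray calibration idea can be sketched with a linear phase-to-depth mapping fitted from reference planes at known depths; the coefficients and measurements below are invented for the illustration (the paper derives its actual mapping from the structured-light-field model):

```python
import numpy as np

# Per-ray calibration: a reference plane is placed at several known depths,
# the unwrapped fringe phase along one ray is recorded at each, and an
# independent linear mapping depth = a*phase + b is fitted for that ray.
# (a_true/b_true are assumed ground-truth coefficients for the demo.)
depths = np.array([0.0, 10.0, 20.0, 30.0])   # known plane positions, mm
a_true, b_true = 2.5, 4.0
phases = (depths - b_true) / a_true           # phase observed on this ray

A = np.column_stack((phases, np.ones_like(phases)))
(a, b), *_ = np.linalg.lstsq(A, depths, rcond=None)

z = a * 6.0 + b   # depth recovered from a new phase measurement of 6.0 rad
print(round(float(z), 1))  # → 19.0
```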

  5. 3D EIT image reconstruction with GREIT.

    PubMed

    Grychtol, Bartłomiej; Müller, Beat; Adler, Andy

    2016-06-01

    Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
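    GREIT obtains its linear reconstruction matrix by training against simulated targets; as a much-simplified stand-in, the sketch below shows the generic one-step regularized linear reconstruction that such a matrix replaces (the toy Jacobian, dimensions and regularization are assumptions, not GREIT's actual training procedure):

```python
import numpy as np

def one_step_reconstruction(J, v, lam=0.01):
    """Generic one-step Tikhonov-regularised linear EIT reconstruction:
    x = (J^T J + lam^2 I)^-1 J^T v, mapping a voltage-difference frame v
    to a conductivity-change image x via sensitivity matrix J."""
    R = np.linalg.solve(J.T @ J + lam**2 * np.eye(J.shape[1]), J.T)
    return R @ v

# Toy problem: 16 boundary measurements, 9 image 'voxels'.
rng = np.random.default_rng(4)
J = rng.normal(size=(16, 9))
x_true = np.zeros(9)
x_true[4] = 1.0                      # single perturbed voxel
x_hat = one_step_reconstruction(J, J @ x_true, lam=1e-6)
print(int(np.argmax(np.abs(x_hat))))  # → 4
```

    In 3D GREIT the same linear-matrix structure applies, but `v` stacks measurements from both electrode planes and `x` spans a volumetric voxel grid.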

  6. Miniaturized laser illumination module for 3D areal mapper

    NASA Astrophysics Data System (ADS)

    Gaynor, Edwin S.; Blase, W. Paul; Woodward, Kim G.

    1998-01-01

    We report progress towards a miniaturized laser illumination module (LIM) for illuminating objects with structured light for 3D imaging purposes. The module, when combined with an off-axis camera and a PC, will image volumes in near-real-time at a range-dependent resolution using 256 X 256 resolution elements. The miniaturized LIM comprises a red laser diode source, a hologram, a spatial light modulator and a projection lens. We present optical and electronic design features of the device in terms of constraints on size and manufacturability. The miniature LIM can be applied to diverse 3D imaging problems, including industrial reverse engineering and inspection, and medical diagnostics and prosthetics design.

  7. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  8. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with the matched features, we compute disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices, which are computed from the fundamental matrix estimated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion; this is required because the camera motion obtained from the essential matrix is only defined up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real time.
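
The essential-matrix step described above (E obtained from a fundamental matrix and known intrinsics, then projected onto the essential manifold) can be sketched as follows; the function name and the identity-intrinsics usage in the example are illustrative, not from the paper.

```python
import numpy as np

def essential_from_fundamental(F, K1, K2):
    """Compute E = K2^T F K1, then enforce the essential-matrix
    constraint: two equal singular values and one zero."""
    E = K2.T @ F @ K1
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

With identity intrinsics, F coincides with E, and a valid essential matrix factors as [t]x R with singular values (|t|, |t|, 0).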

  9. High definition 3D ultrasound imaging.

    PubMed

    Morimoto, A K; Krumm, J C; Kozlowski, D M; Kuhlmann, J L; Wilson, C; Little, C; Dickey, F M; Kwok, K S; Rogers, B; Walsh, N

    1997-01-01

    We have demonstrated high definition and improved resolution using a novel scanning system integrated with a commercial ultrasound machine. The result is a volumetric 3D ultrasound data set that can be visualized using standard techniques. Unlike other 3D ultrasound approaches, image quality is improved over standard 2D data. Image definition and bandwidth are improved using patent-pending techniques. The system can be used to image patients or wounded soldiers for general imaging of anatomy such as abdominal organs, extremities, and the neck. Although the risks associated with x-ray carcinogenesis are relatively low at diagnostic dose levels, concerns remain for individuals in high-risk categories. In addition, the cost and portability of CT and MRI machines can be prohibitive. In comparison, ultrasound can provide portable, low-cost, non-ionizing imaging. Previous clinical trials comparing ultrasound to CT were used to demonstrate qualitative and quantitative improvements of ultrasound using the Sandia technologies. Transverse leg images demonstrated much higher clarity and lower noise than is seen in traditional ultrasound images. An x-ray CT scan of the same cross-section was provided for comparison. The results of our most recent trials demonstrate the advantages of 3D ultrasound and motion compensation compared with 2D ultrasound. Metal objects can also be observed within the anatomy. PMID:10168958

  10. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  11. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  12. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  13. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real-time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.

  14. Documenting a Complex Modern Heritage Building Using Multi Image Close Range Photogrammetry and 3d Laser Scanned Point Clouds

    NASA Astrophysics Data System (ADS)

    Vianna Baptista, M. L.

    2013-07-01

    Integrating different technologies and areas of expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce time on fieldwork, the two-person field survey team had to juggle, over three days, the continuous artistic activities and the performers' intense schedule. Both technologies (high definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned in high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks on the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and damage-mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefit to the subsequent restoration project.

  15. Geomatics for precise 3D breast imaging.

    PubMed

    Alto, Hilary

    2005-02-01

    Canadian women have a one in nine chance of developing breast cancer during their lifetime. Mammography is the most common imaging technology used for breast cancer detection in its earliest stages through screening programs. Clusters of microcalcifications are primary indicators of breast cancer; the shape, size and number may be used to determine whether they are malignant or benign. However, overlapping images of calcifications on a mammogram hinder the classification of the shape and size of each calcification and a misdiagnosis may occur resulting in either an unnecessary biopsy being performed or a necessary biopsy not being performed. The introduction of 3D imaging techniques such as standard photogrammetry may increase the confidence of the radiologist when making his/her diagnosis. In this paper, traditional analytical photogrammetric techniques for the 3D mathematical reconstruction of microcalcifications are presented. The techniques are applied to a specially designed and constructed x-ray transparent Plexiglas phantom (control object). The phantom was embedded with 1.0 mm x-ray opaque lead pellets configured to represent overlapping microcalcifications. Control points on the phantom were determined by standard survey methods and hand measurements. X-ray films were obtained using a LORAD M-III mammography machine. The photogrammetric techniques of relative and absolute orientation were applied to the 2D mammographic films to analytically generate a 3D depth map with an overall accuracy of 0.6 mm. A Bundle Adjustment and the Direct Linear Transform were used to confirm the results. PMID:15649085
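
The Direct Linear Transform mentioned above, which relates 3D control points to their 2D film coordinates, can be sketched as a minimal least-squares estimate of the 3x4 projection matrix; this is a generic DLT, not the authors' exact implementation.

```python
import numpy as np

def dlt_calibrate(X, x):
    """Direct Linear Transform: estimate the 3x4 projection matrix P
    from >= 6 correspondences between 3D control points X (N x 3) and
    2D image points x (N x 2). Each correspondence contributes two
    homogeneous linear equations in the 12 entries of P; the solution
    is the null vector of the stacked system (smallest singular value)."""
    A = []
    for (Xi, Yi, Zi), (u, v) in zip(X, x):
        A.append([Xi, Yi, Zi, 1, 0, 0, 0, 0, -u * Xi, -u * Yi, -u * Zi, -u])
        A.append([0, 0, 0, 0, Xi, Yi, Zi, 1, -v * Xi, -v * Yi, -v * Zi, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)
```

With noiseless correspondences the recovered matrix reproduces the original projection exactly (up to scale, which cancels in the homogeneous divide).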

  16. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added positions of the triangle meshes. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any changes in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. Our algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is compared with other existing 3D steganography algorithms.

  17. 3D sensor for indirect ranging with pulsed laser source

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Bellisai, S.; Villa, F.; Scarcella, C.; Bahgat Shehata, A.; Tosi, A.; Padovini, G.; Zappa, F.; Tisa, S.; Durini, D.; Weyers, S.; Brockherde, W.

    2012-10-01

    The growing interest in fast, compact and cost-effective 3D ranging imagers for automotive applications has prompted the exploration of many different 3D imaging techniques and the development of new systems for this purpose. CMOS imagers that exploit phase-resolved techniques provide accurate 3D ranging with no complex optics and are rugged and cost-effective. Phase-resolved techniques indirectly measure the round-trip return of the light emitted by a laser and backscattered from a distant target, computing the phase delay between the modulated light and the detected signal. Single-photon detectors, with their high sensitivity, allow the scene to be actively illuminated with low-power excitation (less than 10 W under diffused daylight illumination). We report on a 4x4 array of CMOS SPADs (Single Photon Avalanche Diodes), designed in a high-voltage 0.35 μm CMOS technology for pulsed modulation, in which each pixel computes the phase difference between the laser and the reflected pulse. Each pixel comprises a high-performance 30 μm diameter SPAD, an analog quenching circuit, two 9-bit up-down counters, and memories to store data during readout. The first counter counts the photons detected by the SPAD in a time window synchronous with the laser pulse and integrates the whole echoed signal. The second counter accumulates the number of photons detected in a window shifted with respect to the laser pulse, and acquires only a portion of the reflected signal. The array is read out with a global shutter architecture using a 100 MHz clock; the maximum frame rate is 3 Mframe/s.
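
The two-counter distance recovery described above can be sketched under an idealized model (rectangular pulse, no background counts, second window shifted by exactly one pulse width): the fraction of the echo captured by the shifted window encodes the round-trip delay. The function and parameter names are hypothetical, not taken from the paper.

```python
def spad_pulsed_distance(c_full, c_shifted, pulse_width_s, c_light=3e8):
    """Indirect pulsed time-of-flight from two gated photon counters.

    c_full    -- counts in the window synchronous with the laser pulse
                 (integrates the whole echo)
    c_shifted -- counts in the window shifted by one pulse width
                 (captures a fraction proportional to the delay)

    For a delay td within [0, pulse_width], c_shifted/c_full = td/T,
    so the one-way distance is c * td / 2.
    """
    delay = pulse_width_s * (c_shifted / c_full)
    return c_light * delay / 2.0
```

For example, a 200 ns pulse with half the echo falling in the shifted window implies a 100 ns round trip, i.e. a 15 m target.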

  18. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of geologic surface. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and the faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  19. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, the saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and the repair cost. Also, if the internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within the logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  20. Image-Based 3d Modeling VS Laser Scanning for the Analysis of Medieval Architecture: the Case of ST. Croce Church in Bergamo

    NASA Astrophysics Data System (ADS)

    Cardaci, A.; Versaci, A.

    2013-07-01

    The Church of St. Croce in Bergamo (second half of the 11th century) is a small four-sided building consisting of two overlapping volumes, located in the courtyard adjacent to the Bishop's Palace. In recent years, archaeological excavations have unearthed parts of the edifice that had remained hidden since being buried during the construction of the Basilica of Santa Maria Maggiore, restoring the building to its original form. Due to the recent discoveries, a critical review of all the existing documentation has been considered necessary in order to clarify the relationships among the various building components. A quick, well-timed, chromatically characterized and accurate survey, aimed at the complete digital reconstruction of this interesting example of medieval Italian architecture, was then needed. This suggested simultaneously testing two of the most innovative technologies: the 3D laser scanning survey, ensuring high-resolution and complete models within a short time, and automatic image-based photogrammetric modelling, allowing a three-dimensional reconstruction of the architectural objects. This paper presents the results of an analytical comparison between the two methodologies, analysing their differences, the advantages and deficiencies of each, and the opportunities for future enhancements and developments.

  1. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  2. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360 degree view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-coded structured lighting ensures precise reconstruction of the depth of the object. A 3D imaging system architecture is presented in which the displacement between the camera and the projector is used to triangulate the depth information. The 3D camera system has achieved a depth resolution down to 0.1 mm on a human-head-sized object and 360 degree imaging capability.
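
The camera-projector triangulation mentioned above follows the classic disparity relation, depth = focal length x baseline / disparity. A minimal sketch assuming a pinhole model, with hypothetical function names and parameter values:

```python
def triangulated_depth(focal_px, baseline_m, disparity_px):
    """Camera-projector triangulation: depth is inversely proportional
    to the observed disparity of the projected pattern."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(focal_px, baseline_m, depth_m, disparity_step_px=1.0):
    """Depth change caused by a one-step disparity change at a given
    depth; illustrates why depth resolution degrades with range."""
    d = focal_px * baseline_m / depth_m
    return depth_m - focal_px * baseline_m / (d + disparity_step_px)
```

For a 1000 px focal length and a 0.2 m baseline, a 100 px disparity corresponds to 2 m depth, and one pixel of disparity there is worth about 2 cm of depth.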

  3. 3D Buildings Extraction from Aerial Images

    NASA Astrophysics Data System (ADS)

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of this semi-automatic approach is that each building is processed individually, so the parameters for building feature extraction can be determined more precisely for each area. In the early stage, the presented technique extracts line segments only inside manually specified areas. A rooftop hypothesis is then used to determine, from the set of extracted lines and corners, a subset of quadrangles that could form building roofs. After collecting all potential roof shapes in all image overlaps, epipolar geometry is applied to find matches between images. This allows accurate selection of building roofs, removing false positives, and identification of their global 3D coordinates given the cameras' internal parameters and coordinates. The last step of the image matching is based on geometric constraints, in contrast to traditional correlation; correlation is applied only in some highly restricted areas in order to find coordinates more precisely, significantly reducing the processing time of the algorithm. The algorithm has been tested on a set of Milan's aerial images and shows highly accurate results.

  4. Automatic needle segmentation in 3D ultrasound images using 3D Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has now been developed to avoid accidents, or death of the patient, caused by inaccurate localization of the electrode and the tumor during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, for needle segmentation in 3D US images for 3D US imaging guidance. Based on the (Φ, θ, ρ, α) representation of straight lines in 3D space, we used the 3DHT algorithm to segment needles successfully, assuming that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information about the 3D US images. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms. The experiments demonstrated the feasibility of the two 3D needle segmentation algorithms described in this paper.
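
The randomized Hough idea behind the 3DRHT, sampling point pairs to hypothesize 3D lines and scoring each hypothesis by its inliers, can be sketched as follows. This is a generic RANSAC-flavored illustration of the sampling strategy, not the authors' (Φ, θ, ρ, α) accumulator.

```python
import numpy as np

def rht_line_3d(points, n_iters=2000, tol=1e-3, rng=None):
    """Randomized Hough-style 3D line detection (illustrative sketch):
    repeatedly sample two points, form the candidate line through them,
    and score it by the number of points within tol of the line."""
    points = np.asarray(points, float)
    rng = np.random.default_rng(rng)
    best_score, best_line = -1, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line
        v = points - p
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        score = int((dist < tol).sum())
        if score > best_score:
            best_score, best_line = score, (p, d)
    return best_line, best_score
```

In a needle-segmentation setting, the detected line would seed the needle axis; here it simply recovers the dominant line among points with outliers.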

  5. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be directly calibrated on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  6. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2012-08-29

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  7. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene as the head position varies. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Given its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems: the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  8. 3D integrated hybrid silicon laser.

    PubMed

    Song, Bowen; Stagarescu, Cristian; Ristic, Sasa; Behfar, Alex; Klamkin, Jonathan

    2016-05-16

    Lasers were realized on silicon by flip-chip bonding of indium phosphide (InP) devices containing total internal reflection turning mirrors for surface emission. Light is coupled to the silicon waveguides through surface grating couplers. With this technique, InP lasers were integrated on silicon. Laser cavities were also formed by coupling InP reflective semiconductor optical amplifiers to microring resonator filters and distributed Bragg reflector mirrors. Single-mode continuous wave lasing was demonstrated with a side mode suppression ratio of 30 dB. Up to 2 mW of optical power was coupled to the silicon waveguide. Thermal simulations were also performed to evaluate the low thermal impedance afforded by this architecture and potential for high wall-plug efficiency. PMID:27409867

  9. 3D Laser Scanning in Technology Education.

    ERIC Educational Resources Information Center

    Flowers, Jim

    2000-01-01

    A three-dimensional laser scanner can be used as a tool for design and problem solving in technology education. A hands-on experience can enhance learning by captivating students' interest and empowering them with creative tools. (Author/JOW)

  10. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

    Recent developments in 3D imaging lidar are presented. Long range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step-change advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single photon counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, with 4 μJ pulses at a frame rate of 100 kHz, using a low-cost fibre laser operating at a wavelength of λ = 1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.
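
The time-of-flight arithmetic behind these figures is simple: one-way range is half the round-trip distance, so a 9 km target returns photons after 60 μs, and sub-4 cm depth resolution corresponds to a system timing resolution of roughly 267 ps. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_round_trip_s, c=C):
    """One-way range from a round-trip photon time of flight."""
    return c * t_round_trip_s / 2.0

def depth_resolution_from_timing(timing_resolution_s, c=C):
    """Depth resolution corresponding to the system timing resolution."""
    return c * timing_resolution_s / 2.0
```

For instance, a 250 ps timing resolution gives a depth resolution of about 3.75 cm, consistent with the sub-4 cm figure quoted above.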

  11. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on their previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target composed of 1-mm dots printed on clear plastic. The absorption coefficient of each dot was approximately the same as that of a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed at varying depths in an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6-μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast sizes from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, estimated from the full width at half maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: The CNR, lateral field of view, and penetration depth of the dedicated PAM scanning system are sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
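The FWHM-based resolution estimate described above can be sketched as follows; the Gaussian here is a synthetic stand-in for a measured line profile across the carbon-fiber image, with its true FWHM set to the paper's reported 0.42 mm:

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a 1-D profile, estimated as the
    span of samples at or above half the peak (dx = sample spacing)."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * dx

# Synthetic profile: a Gaussian with FWHM 0.42 mm, sampled every 0.01 mm.
x = np.linspace(-2.0, 2.0, 401)                    # mm
sigma = 0.42 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM = 2.355 * sigma
profile = np.exp(-x**2 / (2.0 * sigma**2))
width = fwhm(profile, dx=0.01)                     # close to 0.42 mm
```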

  12. An omnidirectional 3D sensor with line laser scanning

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Gao, Bingtuan; Liu, Chuande; Wang, Peng; Gao, Shuanglei

    2016-09-01

    Active omnidirectional vision offers wide field-of-view (FOV) imaging of an entire 3D environment scene, which is promising for robot navigation. However, existing omnidirectional vision sensors based on a line laser can measure only points located on the optical plane of the laser beam, resulting in low-resolution reconstruction. To improve resolution, other omnidirectional vision sensors project a 2D encoded pattern through a projector and a curved mirror; however, the astigmatism of the curved mirror leads to low-accuracy reconstruction. To solve these problems, a rotating polygon scanning mirror is used to scan the object in the vertical direction so that an entire profile of the observed scene can be obtained at high accuracy, without the astigmatism phenomenon. The proposed method is then calibrated with a conventional 2D checkerboard plate. The experimental results show that the measurement error of the 3D omnidirectional sensor is approximately 1 mm. Moreover, the reconstruction of objects with different shapes based on the developed sensor is also verified.

  13. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2004-12-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a pointwise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach in which color texture is attached automatically. The pointwise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point-cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face appears in at least two, and no more than three, of the pictures. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show exactly the same topology and

  15. Laser profiling of 3D microturbine blades

    NASA Astrophysics Data System (ADS)

    Holmes, Andrew S.; Heaton, Mark E.; Hong, Guodong; Pullen, Keith R.; Rumsby, Phil T.

    2003-11-01

    We have used KrF excimer laser ablation in the fabrication of a novel MEMS power conversion device based on an axial-flow turbine with an integral axial-flux electromagnetic generator. The device has a sandwich structure, comprising a pair of silicon stators either side of an SU8 polymer rotor. The curved turbine rotor blades were fabricated by projection ablation of SU8 parts preformed by conventional UV lithography. A variable aperture mask, implemented by stepping a moving aperture in front of a fixed one, was used to achieve the desired spatial variation in the ablated depth. An automatic process was set up on a commercial laser workstation, with the laser firing and mask motion being controlled by computer. High-quality SU8 rotor parts with diameters of 13 mm and depths of 1 mm were produced at a fluence of 0.7 J/cm2, corresponding to a material removal rate of approximately 0.3 μm per pulse. A similar approach was used to form SU8 guide vane inserts for the stators.

  16. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct inasmuch as it does not require prior computation of image motion. It allows movement of both the viewing system and multiple independently moving objects. The problem is formulated as a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of the 3-D motion segmentation to the image sequence spatiotemporal variations. The second term regularizes depth; the assumption that environmental objects are rigid automatically accounts for the regularity of 3-D motion within each region of segmentation. The third term enforces the regularity of segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. The algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351
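The three-term functional described above can be sketched in generic notation (the symbols below are illustrative choices, not the authors' exact notation): regions R_k with rigid 3-D motion parameters m_k, a depth map z over the image domain Ω, and region boundaries ∂R_k:

```latex
% Data conformity per region + depth regularization + boundary length:
E\bigl(\{R_k\},\, z,\, \{m_k\}\bigr)
  = \sum_{k} \int_{R_k} \Psi\bigl(\mathbf{x};\, m_k,\, z\bigr)\, d\mathbf{x}
  \;+\; \lambda \int_{\Omega} g\bigl(\lVert \nabla z \rVert\bigr)\, d\mathbf{x}
  \;+\; \mu \sum_{k} \oint_{\partial R_k} ds
```

Here Ψ measures how well region k's rigid motion m_k and the depth z explain the image's spatiotemporal variations; minimization alternates curve evolution (level sets) for the R_k, gradient descent for z, and least squares for the m_k, matching the scheme in the abstract.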

  17. Non-destructive 3D Imaging of Extraterrestrial Materials by Synchrotron X-ray Micro- tomography (XR-CMT) and Laser Confocal Scanning Microscopy (LCSM): Beyond Pretty Pictures

    NASA Astrophysics Data System (ADS)

    Ebel, D. S.; Greenberg, M.

    2009-05-01

    We report scientific results made possible only by the use of these two non-destructive 3D imaging techniques. XR-CMT provides 3D image reconstructions at spatial resolutions of 1 to 17 micron/voxel edge. We use XR-CMT to locate potential melt-inclusion-bearing phenocrysts in batches of 100-200 micron lunar fire-fountain spherules; to locate and visualize the morphology of 1-2 mm size, irregular, unmelted Ca-, Al-rich inclusions (CAIs); to quantify chondrule/matrix ratios and chondrule size distributions in 6x6x20 mm chunks of carbonaceous chondrites; and to quantify the modal abundance of opaque phases in similarly sized Martian meteorite fragments and in individual 1-2 mm diameter chondrules from chondrites. LCSM provides 3D image stacks at resolutions < 100 nm/pixel. We are the only group creating deconvolved image stacks of 100 to over 1000 micron long comet particle tracks in aerogel keystones from the Stardust mission. We present measurements of track morphology in 3D, and locate high-value particles using complementary synchrotron x-ray fluorescence (XRF) examination. We show that bench-top LCSM extracts maximum information about tracks and particles rapidly and cheaply prior to destructive disassembly. Using XR-CMT we quantify, for the first time, the volumetric abundances of metal grains in 1-2 mm diameter CR chondrite chondrules. Metal abundances vary from 1 to 37 vol.% among 8 chondrules (and more by inspection), in a meteorite with solar (chondritic) Fe/Si ratio, indicating that chondrules formed and accreted locally from bulk solar-composition material. They are 'complementary' to each other in Fe/Si ratios. Void spaces in chondritic CAIs and chondrules are shown to be a primary feature, not due to plucking during sectioning. CAI morphology in 3D reveals pre-accretionary impact features, and various types of mineralogical layering, seen in 3D, reveal the formation history of these building blocks of planets and asteroids. We also quantify the x

  18. 3D laser gated viewing from a moving submarine platform

    NASA Astrophysics Data System (ADS)

    Christnacher, F.; Laurenzis, M.; Monnin, D.; Schmitt, G.; Metzger, Nicolas; Schertzer, Stéphane; Scholtz, T.

    2014-10-01

    Range-gated active imaging is a prominent technique for night vision, remote sensing or vision through obstacles (fog, smoke, camouflage netting…). Furthermore, range-gated imaging provides not only the scene reflectance but also the range for each pixel. In this paper, we discuss 3D imaging methods for underwater imaging applications. In this situation, it is particularly difficult to stabilize the imaging platform, and 3D reconstruction algorithms suffer from the motion between the different images in the recorded sequence. To overcome this drawback, we investigated a new method based on a combination of image registration by homography and 3D scene reconstruction through tomography or a two-image technique. After stabilisation, the 3D reconstruction is achieved using the two above-mentioned techniques. In the different experimental examples given in this paper, a centimetric resolution could be achieved.

  19. Fast 3D shape measurements using laser speckle projection

    NASA Astrophysics Data System (ADS)

    Schaffer, Martin; Grosse, Marcus; Harendt, Bastian; Kowarschik, Richard

    2011-05-01

    3D measurement setups based on structured light projection are widely used in many industrial applications. Due to intense research in the past, their accuracy is comparably high and the cost of the equipment relatively low. But at the higher acquisition rates demanded in industry, especially on assembly lines, there are still hurdles to overcome in accelerating 3D measurements while retaining accuracy. We developed a projection technique that uses laser speckles to enable fast 3D measurements with statistically structured light patterns. In combination with a temporal correlation technique, dense and accurate 3D reconstructions at nearly video rate can be achieved.
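The temporal-correlation idea behind such speckle-based systems can be sketched as follows: each camera pixel accumulates an intensity sequence over the N projected patterns, and stereo correspondences are found by maximizing the zero-mean normalized correlation of those sequences. The array shapes, the 2-pixel shift, and the row-restricted search are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, W = 32, 8, 8   # N projected speckle patterns, image size H x W

# Hypothetical image stacks: camera 2 sees the same temporal codes as
# camera 1, shifted horizontally by 2 pixels, plus a little noise.
cam1 = rng.random((N, H, W))
cam2 = np.roll(cam1, shift=2, axis=2) + 0.01 * rng.random((N, H, W))

def match_pixel(y, x):
    """Return the cam2 column on row y whose temporal intensity sequence
    best matches cam1[:, y, x] (zero-mean normalized correlation)."""
    a = cam1[:, y, x] - cam1[:, y, x].mean()
    scores = []
    for xx in range(W):
        b = cam2[:, y, xx] - cam2[:, y, xx].mean()
        scores.append((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return int(np.argmax(scores))
```

With long enough sequences the temporal codes are effectively unique per pixel, which is what allows dense matching without any spatial neighborhood.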

  20. Automatic needle segmentation in 3D ultrasound images using 3D improved Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgen

    2008-03-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in applications of image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is widely used to destroy tumor cells or stop bleeding. To avoid accidents or death of the patient through inaccurate localization of the electrode and the tumor during treatment, a 3D US guidance system was developed. In this paper, a new automated technique, the 3D Improved Hough Transform (3DIHT) algorithm, which is potentially fast, accurate, and robust, is presented for needle segmentation in 3D US images for 3D US imaging guidance. Based on a coarse-fine search strategy and a four-parameter representation of lines in 3D space, the 3DIHT algorithm can segment needles quickly, accurately, and robustly. The technique was evaluated using 3D US images acquired by scanning a water phantom. The segmentation position deviation of the line was less than 2 mm and the angular deviation was much less than 2°. The average computational time, measured on a Pentium IV 2.80 GHz PC with a 381×381×250 image, was less than 2 s.
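A four-parameter line representation of the kind 3DIHT relies on can be sketched as follows; the exact parameterization in the paper may differ, and this is one common choice (two direction angles plus the line's intersection with a reference plane):

```python
import numpy as np

def line_points(theta, phi, u0, v0, t):
    """Points on a 3-D line given four parameters: direction angles
    (theta, phi) and the intersection (u0, v0) with the z = 0 plane.
    t is the scalar parameter along the line."""
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])        # unit direction vector
    p0 = np.array([u0, v0, 0.0])         # anchor point on the z = 0 plane
    return p0 + np.outer(np.atleast_1d(t), d)
```

A Hough accumulator over binned (theta, phi, u0, v0) then collects votes from thresholded voxels, and the coarse-fine strategy mentioned in the abstract refines the winning bin at successively finer resolutions.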

  1. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable because noise, outliers, and missing measurements are present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes the surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The error minimization is solved iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details which were previously not contained in the surface scans are incorporated through the use of image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a
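A toy 1-D analogue of this depth/normal fusion, under assumptions not in the paper (quadratic penalties only, plain gradient descent, no BRDF or interreflection terms): refine noisy scanner depths z0 so that their slopes match photometrically estimated slopes g, by minimizing a data term plus a slope term:

```python
import numpy as np

# Noisy absolute depths z0 (the 'laser scan') and accurate slopes g
# (standing in for photometrically estimated normals) on a 1-D profile.
n = 50
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
true_z = np.sin(2.0 * np.pi * x)
z0 = true_z + np.random.default_rng(2).normal(0.0, 0.1, n)
g = np.diff(true_z) / dx

# Minimize E(z) = sum (z - z0)^2 + lam * sum (z' - g)^2 by gradient descent.
z, lam, eta = z0.copy(), 0.05, 1e-3
for _ in range(5000):
    r = np.diff(z) / dx - g              # slope residuals
    grad = 2.0 * (z - z0)                # data term gradient
    grad[:-1] -= 2.0 * lam * r / dx      # slope term, effect on left node
    grad[1:] += 2.0 * lam * r / dx       # slope term, effect on right node
    z -= eta * grad
# z is now a denoised profile whose slopes follow g while staying
# anchored to the absolute (large-scale) shape given by z0.
```

The full algorithm adds an intensity term and works on 2-D depth maps, but the mechanism — absolute depths pin the large-scale shape while slope/normal constraints restore fine detail — is the same.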

  2. Body image, shape, and volumetric assessments using 3D whole body laser scanning and 2D digital photography in females with a diagnosed eating disorder: preliminary novel findings.

    PubMed

    Stewart, Arthur D; Klein, Susan; Young, Julie; Simpson, Susan; Lee, Amanda J; Harrild, Kirstin; Crockett, Philip; Benson, Philip J

    2012-05-01

    We piloted three-dimensional (3D) body scanning in eating disorder (ED) patients. Assessments of 22 ED patients (including nine anorexia nervosa (AN) patients, 12 bulimia nervosa (BN) patients, and one patient with eating disorder not otherwise specified) and 22 matched controls are presented. Volunteers underwent visual screening, two-dimensional (2D) digital photography to assess perception and dissatisfaction (via computerized image distortion), and adjunctive 3D full-body scanning. Patients and controls perceived themselves as bigger than their true shape (except in the chest region for controls and anorexia patients). All participants wished to be smaller across all body regions. Patients had poorer veridical perception and greater dissatisfaction than controls. Perception was generally poorer and dissatisfaction greater in bulimia compared with anorexia patients. 3D-volume:2D-area relationships showed that anorexia cases had the least tissue on the torso and the most on the arms and legs relative to frontal area. The engagement of patients with the scanning process suggests a validation study is viable. This would enable mental constructs of body image to be aligned with segmental volumes of body areas, overcoming limitations and errors associated with 2D instruments restricted to frontal (coronal) shapes. These novel data could inform the design of clinical trials in adjunctive treatments for eating disorders. PMID:22506746

  3. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    PubMed

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-01

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. PMID:25966320

  4. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry [J. Opt. Soc. Am. A 18, 765 (2001)] enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method regarding the 3D spatial resolution of digital holography. PMID:27139648

  5. Comparison of 3d Reconstruction Services and Terrestrial Laser Scanning for Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Rasztovits, S.; Dorninger, P.

    2013-07-01

    Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open-source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web services, namely Arc3D and AutoDesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of the generation of the models is given, considering the interactive and processing time costs.
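The difference-model comparison against the TLS reference can be sketched as a nearest-neighbour distance computation between point clouds; the brute-force version below is only suitable for small clouds, and the clouds and the 2 mm offset are illustrative:

```python
import numpy as np

def nearest_distances(points, reference):
    """For each point of a web-service model, the distance to the
    nearest point of the TLS reference cloud (brute force)."""
    d = np.linalg.norm(points[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

# Illustrative clouds (metres): the 'model' is the reference
# shifted by a systematic 2 mm in z.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
model = ref + np.array([0.0, 0.0, 0.002])
dists = nearest_distances(model, ref)
```

Real comparisons use spatial indexing (k-d trees) and point-to-triangle distances against the reference mesh, but the resulting per-point distance field is exactly what a colour-coded difference model visualizes.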

  6. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangle scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source serves as the optical indicator, scanning one line at a time. A CCD image sensor captures the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine-vision method, and the triangulation structure parameters were calibrated with fine wires arranged in parallel. The CCD imaging part and the line laser indicator are mounted on a linear motor carriage that scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot capture the complete image of the laser line; in this system, two CCD image sensors are placed symmetrically on either side of the laser indicator, so the structure effectively includes two laser triangulation measurement units. Another novel design choice is that three laser indicators are arranged to reduce the scanning time, since it is difficult for a person to stay still for a long time. The 3D data are calculated after scanning, and further data processing includes 3D coordinate refinement, mesh generation, and surface rendering. Experiments show that this system has a simple structure, high scanning speed, and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
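The underlying laser-triangulation geometry can be sketched in its simplest pinhole form as z = f·b/d, where b is the baseline between laser and camera, f the focal length, and d the displacement of the imaged laser line on the sensor. This simplified formula and the numbers below are illustrative, not the authors' calibration model:

```python
def triangulation_depth(baseline_mm: float, focal_mm: float,
                        offset_mm: float) -> float:
    """Simplified pinhole laser triangulation: depth z = f * b / d,
    where d is the laser-line displacement on the sensor."""
    return focal_mm * baseline_mm / offset_mm

# e.g. a 100 mm baseline, a 16 mm lens, and a 4 mm line offset
z = triangulation_depth(100.0, 16.0, 4.0)
```

The inverse dependence on d is why calibration of the baseline and lens parameters, as described in the abstract, directly sets the depth accuracy.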

  7. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
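The median fusion of candidate disparity fields described above can be sketched as follows; the candidate fields here are synthetic, with one deliberately mismatched "stereopair" to show the median's robustness to a bad retrieval:

```python
import numpy as np

rng = np.random.default_rng(1)
true_disp = np.full((4, 5), 10.0)    # tiny illustrative disparity field

# Seven candidate disparity fields from retrieved stereopairs; each is a
# noisy version of the truth, and one is a bad photometric match.
candidates = np.stack([true_disp + rng.normal(0.0, 0.5, true_disp.shape)
                       for _ in range(7)])
candidates[0] += 20.0                # the mismatched stereopair

fused = np.median(candidates, axis=0)  # per-pixel median, outlier-robust
```

The fused field is then applied to the 2D query to synthesize the right view, with occlusions and newly exposed areas handled as the abstract describes.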

  8. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  9. 3D Lasers Increase Efficiency, Safety of Moving Machines

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.

  10. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
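The intensity-thresholding and volume-calculation steps described above can be sketched on a synthetic volume; the threshold and voxel size are illustrative, and real SPECT analysis would add the interactive or fuzzy-connectedness refinements the abstract mentions:

```python
import numpy as np

def segment_and_volume(volume, threshold, voxel_mm3):
    """Binary intensity thresholding, then voxel counting to get the
    physical volume of the segmented structure."""
    mask = volume >= threshold
    return mask, mask.sum() * voxel_mm3

# Synthetic volume: a bright 4 x 4 x 4 block inside a 10^3 grid.
vol = np.zeros((10, 10, 10))
vol[2:6, 2:6, 2:6] = 100.0
mask, v_mm3 = segment_and_volume(vol, threshold=50.0, voxel_mm3=2.0)
```

Tracking this volume across time-gated SPECT frames is what yields the gastric accommodation and emptying curves the analysis targets.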

  11. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

    There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The widely known computerized tomography (CT) gives good results in the x-ray wavelength range, where filtered back-projection and the Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called `diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and recently there has been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information-transfer point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', in contrast to the case of object rotation, where a 'diablo' is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case.
This time, we
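    The Fourier projection-slice theorem that underpins straight-ray CT, mentioned above, is easy to verify numerically: the 1D Fourier transform of a parallel projection equals the central slice of the object's 2D Fourier transform. The numpy sketch below is not from the paper; the Gaussian test object is an assumption for illustration.

```python
import numpy as np

# A simple 2D test object: a centred Gaussian blob.
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing="ij")
obj = np.exp(-(X**2 + Y**2) / (2 * 8.0**2))

# One parallel-beam projection (integration along y).
proj = obj.sum(axis=1)

# Projection-slice theorem: FT1{projection} == central (k_y = 0) slice of FT2{object}.
ft_proj = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj)))
ft_obj = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj)))
central_slice = ft_obj[:, n // 2]

print(np.allclose(ft_proj, central_slice, atol=1e-6))  # True
```

    Diffraction tomography replaces this straight-line slice with data lying on arcs (the Ewald sphere), which is why the 3D transfer function analysis above is needed.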

  12. Simulating imaging spectrometer data of a mixed old-growth forest: A parameterization of a 3D radiative transfer model based on airborne and terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Schneider, F. D.; Leiterer, R.; Morsdorf, F.; Gastellu-Etchegorry, J.; Lauret, N.; Pfeifer, N.; Schaepman, M. E.

    2013-12-01

    Remote sensing offers unique potential to study forest ecosystems by providing spatially and temporally distributed information that can be linked with key biophysical and biochemical variables. The estimation of biochemical constituents of leaves from remotely sensed data is of high interest, revealing insight into photosynthetic processes, plant health, plant functional types, and speciation. However, the scaling of observations at the canopy level to the leaf level, or vice versa, is not trivial due to the structural complexity of forests. Thus, a common solution for scaling spectral information is the use of physically-based radiative transfer models. The discrete anisotropic radiative transfer model (DART), one of the most complete coupled canopy-atmosphere 3D radiative transfer models, was parameterized based on airborne and in-situ measurements. At-sensor radiances were simulated and compared with measurements from an airborne imaging spectrometer. The study was performed on the Laegern site, a temperate mixed forest characterized by steep slopes, a heterogeneous spectral background, and deciduous and coniferous trees at different development stages (dominated by beech trees; 47°28′42.0″ N, 8°21′51.8″ E, 682 m asl, Switzerland). It is one of the few studies conducted on an old-growth forest. In particular, 3D modeling of the complex canopy architecture is crucial for modeling the interaction of photons with the vegetation canopy and its background. Thus, we developed two forest reconstruction approaches: 1) based on a voxel grid, and 2) based on individual tree detection. Both methods are transferable to various forest ecosystems and applicable at scales between plot and landscape. Our results show that the newly developed voxel grid approach is preferable to a parameterization based on individual trees. In comparison to the actual imaging spectrometer data, the simulated images exhibit very similar spatial patterns, whereas absolute radiance values are

  13. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms (mainly of the Denisyuk type) and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials, the panchromatic photopolymers such as the DuPont and Bayer materials, is also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depends on the correct recording technique using optimal laser wavelengths, the availability of improved panchromatic recording materials, and the new display light sources.

  14. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become popular in recent years. Using light field optics, a real 3D scene can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded back into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after the picture is taken), the light-field camera's best-known function, is a sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our light field camera and our 3D display, on which a real 3D image is reconstructed from acquired and computer-simulated light field data. I then explain our data processing method, whose arithmetic is performed in the real domain rather than the Fourier domain. Our 3D display system is characterized by a few notable features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate precisely to the flat display on which the light field data are shown.
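    The "refocusing" operation described above is, in its simplest form, a shift-and-add over the sub-aperture views of the 4D light field: each view is translated in proportion to its offset from the central view and the results are averaged. The numpy sketch below is a minimal illustration under assumed conventions (an `L[u, v, s, t]` array with integer-pixel shifts), not the author's real-domain method.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, s, t].

    Each sub-aperture view (u, v) is shifted in proportion to its offset
    from the central view and the views are then averaged; `alpha` selects
    the synthetic focal plane (0 = no shift). Integer-pixel shifts only.
    """
    nu, nv, ns, nt = lightfield.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((ns, nt))
    for u in range(nu):
        for v in range(nv):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (nu * nv)

# A point target with one pixel of disparity per view step: refocusing with
# alpha = -1 re-aligns all nine views onto the same pixel.
lf = np.zeros((3, 3, 9, 9))
for u in range(3):
    for v in range(3):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

print(refocus(lf, -1.0)[4, 4])  # 1.0 (all views aligned)
```

    With `alpha = 0` the target stays spread across the views (its peak drops to 1/9), which is exactly the "sectioning" behaviour: only scene points at the selected depth are reinforced.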

  15. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and three-dimensional motion vectors, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  16. 3-D laser patterning process utilizing horizontal and vertical patterning

    DOEpatents

    Malba, Vincent; Bernhardt, Anthony F.

    2000-01-01

    A process which vastly improves the 3-D patterning capability of laser pantography (computer-controlled laser direct-write patterning). The process uses commercially available electrodeposited photoresist (EDPR) to pattern 3-D surfaces. The EDPR covers the surface of a metal layer conformally, coating the vertical as well as horizontal surfaces. A laser pantograph then patterns the EDPR, which is subsequently developed in a standard, commercially available developer, leaving patterned trench areas in the EDPR. The metal layer thereunder is now exposed in the trench areas and masked in others, and thereafter can be etched to form the desired pattern (subtractive process), or can be plated with metal, followed by resist stripping and removal of the remaining field metal (additive process). This improved laser pantograph process is simpler, faster, more manufacturable, and requires no micro-machining.

  17. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed through the display space; these form a volumetric image because of the after-image effect of the human eye. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  18. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three-dimensional laser radar imagery, for use with a robotic system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide the information the robot needs to fetch and grasp targets in a space scenario.

  19. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  20. 3D photoacoustic imaging of a moving target

    NASA Astrophysics Data System (ADS)

    Ephrat, Pinhas; Roumeliotis, Michael; Prato, Frank S.; Carson, Jeffrey J. L.

    2009-02-01

    We have developed a fast 3D photoacoustic imaging system based on a sparse array of ultrasound detectors and iterative image reconstruction. To investigate the high frame-rate capabilities of our system in the context of rotational motion, flow, and spectroscopy, we performed high frame-rate imaging on a series of targets, including a rotating graphite rod, a bolus of methylene blue flowing through a tube, and a tube filled with methylene blue under a no-flow condition (imaged hyper-spectrally). Our frame rate for image acquisition was 10 Hz, which was limited by the laser repetition rate. We were able to track the rotation of the rod and accurately estimate its rotational velocity, at a rate of 0.33 rotations per second. The flow of contrast in the tube, at a flow rate of 180 μL/min, was also well depicted, and quantitative analysis suggested a potential method for estimating flow velocity from such measurements. The spectrum obtained did not provide accurate absolute results, but depicted the spectral absorption signature of methylene blue, which may be sufficient for identification purposes. These preliminary results suggest that our high frame-rate photoacoustic imaging system could be used for identifying contrast agents and monitoring kinetics as an agent propagates through specific, simple structures such as blood vessels.

  1. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  2. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
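    The full-width-at-half-maximum measurements of individual microbeams reported above can be illustrated with a simple profile-analysis sketch. The code below is an assumption-laden illustration, not the authors' analysis pipeline: it estimates FWHM from a sampled peak by linearly interpolating the two half-maximum crossings, using a synthetic Gaussian profile as the test case.

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a sampled peak, with linear
    interpolation of the two half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def crossing(ia, ib):
        # Linearly interpolate the x at which profile == half between samples ia, ib.
        return x[ia] + (half - profile[ia]) * (x[ib] - x[ia]) / (profile[ib] - profile[ia])

    left = x[i0] if i0 == 0 else crossing(i0 - 1, i0)
    right = x[i1] if i1 == len(x) - 1 else crossing(i1 + 1, i1)
    return right - left

# Gaussian test peak: theoretical FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.3548*sigma.
x = np.linspace(-50.0, 50.0, 2001)          # e.g. micrometres, 0.05 um sampling
profile = np.exp(-x**2 / (2.0 * 10.0**2))   # sigma = 10
print(fwhm(x, profile))                      # ~= 23.55
```

    In practice the profile would be a line scan across a confocal image of a microbeam track, and peak-to-valley dose ratios come from comparing such peaks with the plateau between them.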

  3. Precision Control Module For UV Laser 3D Micromachining

    NASA Astrophysics Data System (ADS)

    Wu, Wen-Hong; Hung, Min-Wei; Chang, Chun-Li

    2011-01-01

    UV lasers have been widely used in various micromachining tasks such as micro-scribing and patterning. At present, the semiconductor, LED, photovoltaic solar panel, and touch panel industries all need UV laser processing systems. However, most industrial UV laser processing applications are limited to two-dimensional (2D) plane processing, leaving substantial opportunities to be developed, such as three-dimensional (3D) structuring of micro-electromechanical systems (MEMS) sensors or precise depth control of indium tin oxide (ITO) thin-film edge insulation in touch panels. This research aims to develop a UV laser 3D micromachining module that can create novel applications for industry. With a specially designed beam expander in the optical system, the focal point of the UV laser can be adjusted quickly and accurately through the optical-path control lens of the beam expander. Furthermore, integrated software for the galvanometric scanner and the focal-point adjustment mechanism is developed as well, so as to carry out precise 3D microstructure machining.

  4. 3D reconstruction with two webcams and a laser line projector

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Hui, Bingwei; Qiu, Shaohua; Wen, Gongjian

    2014-09-01

    Three-dimensional (3D) reconstruction is one of the most attractive research topics in photogrammetry and computer vision. Nowadays 3D reconstruction with simple and consumable equipment plays an important role. In this paper, a 3D reconstruction desktop system is built based on binocular stereo vision using a laser scanner. The hardware requirements are a simple commercial hand-held laser line projector and two common webcams for image acquisition. Generally, 3D reconstruction based on passive triangulation methods requires point correspondences among various viewpoints, and the development of matching algorithms remains a challenging task in computer vision. In our proposal, with the help of a laser line projector, stereo correspondences are established robustly from epipolar geometry and the laser shadow on the scanned object. To establish correspondences more conveniently, epipolar rectification is employed using Bouguet's method after stereo calibration with a printed chessboard. 3D coordinates of the observed points are worked out with ray-ray triangulation, and reconstruction outliers are removed with the planarity constraint of the laser plane. Dense 3D point clouds are derived from multiple scans under different orientations. Each point cloud is derived by sweeping the laser plane across the object under reconstruction. The Iterative Closest Point algorithm is employed to register the derived point clouds. Rigid body transformation between neighboring scans is obtained to get the complete 3D point cloud. Finally, polygon meshes are reconstructed from the derived point cloud and color images are used in texture mapping to get a lifelike 3D model. Experiments show that our reconstruction method is simple and efficient.
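    At the core of laser-line scanning like that above is the intersection of a camera ray with the calibrated laser plane. A minimal sketch of that geometry follows; the plane coefficients and pixel ray below are hypothetical values chosen for illustration, not calibration results from the paper.

```python
import numpy as np

def ray_plane_triangulate(ray_dir, plane_n, plane_d):
    """Intersect a camera ray X = t * ray_dir (camera at the origin) with
    the laser plane  n . X = d.  Solving n . (t * ray_dir) = d gives t."""
    t = plane_d / np.dot(plane_n, ray_dir)
    return t * np.asarray(ray_dir)

# Hypothetical setup: laser plane x + z = 1 (n = [1, 0, 1], d = 1) and a
# pixel whose normalized camera ray points along the optical axis.
p = ray_plane_triangulate([0.0, 0.0, 1.0], np.array([1.0, 0.0, 1.0]), 1.0)
print(p)  # [0. 0. 1.]
```

    Repeating this for every pixel on the detected laser stripe, in every frame of a sweep, yields the per-scan point cloud that is later registered with ICP.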

  5. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25 M NaOH. Post etch, the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied.
The range of track diameter observed was between 4

  6. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition algorithm for semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the 3D object to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases to label unknown images randomly selected from the web. Results obtained show promising performances, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  7. 3D imaging using projected dynamic fringes

    NASA Astrophysics Data System (ADS)

    Shaw, Michael M.; Atkinson, John T.; Harvey, David M.; Hobson, Clifford A.; Lalor, Michael J.

    1994-12-01

    An instrument capable of highly accurate, non-contact range measurement has been developed, which is based upon the principle of projected rotating fringes. More usually known as dynamic fringe projection, it is this technique which is exploited in the dynamic automated range transducer (DART). The intensity waveform seen at the target and sensed by the detector, contains all the information required to accurately determine the fringe order. This, in turn, allows the range to be evaluated by the substitution of the fringe order into a simple algebraic expression. Various techniques for the analysis of the received intensity signals from the surface of the target have been investigated. The accuracy to which the range can be determined ultimately depends upon the accuracy to which the fringe order can be evaluated from the received intensity waveform. It is extremely important to be able to closely determine the fractional fringe order value, to achieve any meaningful results. This paper describes a number of techniques which have been used to analyze the intensity waveform, and critically appraises their suitability in terms of accuracy and required speed of operation. This work also examines the development of this instrument for three-dimensional measurements based on single or two beam systems. Using CCD array detectors, a 3-D range map of the object's surface may be produced.

  8. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and on hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: Sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  9. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction in highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  10. Compression of 3D integral images using wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens; each viewpoint image is then decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a Three-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This achieves decorrelation within and between the 2D low frequency bands of the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with a previously reported 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality, at very low bit rates.
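    The viewpoint-extraction step described above (one pixel from each microlens per viewpoint image) is a pure rearrangement of the integral image, which numpy can express as a reshape plus transpose. The sketch below assumes square `lens_px` x `lens_px` microlens cells for simplicity (an assumption; the paper treats unidirectional integral images, where the cells vary in one direction only).

```python
import numpy as np

def extract_viewpoints(integral_img, lens_px):
    """Rearrange an integral image tiled into lens_px x lens_px microlens
    cells into viewpoint images: taking pixel (i, j) from every cell yields
    viewpoint image (i, j). Returns shape (lens_px, lens_px, H/lens_px, W/lens_px)."""
    h, w = integral_img.shape
    cells = integral_img.reshape(h // lens_px, lens_px, w // lens_px, lens_px)
    # axes: (cell_row, pix_row, cell_col, pix_col) -> (pix_row, pix_col, cell_row, cell_col)
    return cells.transpose(1, 3, 0, 2)

img = np.arange(36).reshape(6, 6)      # toy 6x6 integral image, 3x3 microlenses
views = extract_viewpoints(img, 3)
print(views.shape)  # (3, 3, 2, 2)
```

    The inverse mapping (transpose back, then reshape) restores the integral image exactly, which is why the codec can reassemble the 3D image after decompressing the viewpoint images.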

  11. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  12. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via a programmable aperture. Secondly, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.

  13. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.
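    The acoustic wave-propagation modelling used above to generate calibration data can be illustrated with a textbook second-order finite-difference scheme. The following is a generic 1D sketch only (the Paragon codes are large-scale 3D implementations); the grid spacing, time step, velocity, and source values are assumptions for the example.

```python
import numpy as np

def acoustic_1d(nx, nt, c, dx, dt, src_pos, src):
    """Second-order finite-difference solution of the 1D acoustic wave
    equation u_tt = c^2 u_xx with fixed (Dirichlet) ends and a point source."""
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    r2 = (c * dt / dx) ** 2  # squared Courant number; stability needs c*dt/dx <= 1
    for it in range(nt):
        u_next = np.zeros(nx)
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        if it < len(src):
            u_next[src_pos] += dt**2 * src[it]
        u_prev, u = u, u_next
    return u

# Impulse source at the centre of a homogeneous model (c = 1500 m/s);
# c*dt/dx = 0.6, so the scheme is stable.
wavefield = acoustic_1d(nx=201, nt=150, c=1500.0, dx=5.0, dt=0.002,
                        src_pos=100, src=np.array([1.0]))
```

    In a homogeneous medium the two wavefronts propagate symmetrically away from the source; production codes extend this stencil to 3D, add absorbing boundaries and realistic velocity models, and parallelize by domain decomposition.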

  14. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  15. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-01

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low-light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using maximum likelihood estimation and subsequently reconstructed by means of a total variation denoising filter. In this way, polarimetric results are comparable to those obtained under conventional illumination conditions. We also show that polarimetric information retrieved from photon-starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging. PMID:25836861
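
    For intuition, the Stokes/degree-of-polarization computation that the paper handles carefully can be sketched as follows; the four-polarizer-angle measurement scheme and the `eps` guard for empty photon-counting pixels are illustrative assumptions, not the authors' estimator:

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135):
    """Linear Stokes parameters from intensity images recorded behind
    polarizers at 0, 45, 90 and 135 degrees."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = I45 - I135
    return S0, S1, S2

def degree_of_linear_polarization(S0, S1, S2, eps=1e-12):
    # Guard the division: photon-starved pixels may have S0 == 0.
    return np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, eps)

# Fully linearly polarized light at 0 degrees: DoLP should be 1.
I0, I45, I90, I135 = (np.array([1.0]), np.array([0.5]),
                      np.array([0.0]), np.array([0.5]))
S0, S1, S2 = stokes_from_intensities(I0, I45, I90, I135)
dolp = degree_of_linear_polarization(S0, S1, S2)
```

    In the sparse photon-counting regime the raw intensities would first be replaced by maximum-likelihood estimates, as the abstract describes.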

  16. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at 10% level) of 1.0 mm^-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
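
    Reading a limiting resolution off a measured MTF curve (the 10% criterion quoted above) amounts to interpolating for the crossing frequency; a minimal sketch with a synthetic curve, not the study's data:

```python
import numpy as np

def limiting_resolution(freqs, mtf, level=0.10):
    """Return the first spatial frequency (e.g. in mm^-1) at which a
    sampled MTF curve falls to `level`, by linear interpolation."""
    below = np.where(mtf <= level)[0]
    if len(below) == 0:
        return freqs[-1]          # curve never drops below the level
    i = below[0]
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)

# Synthetic exponential MTF: crosses 0.10 at ln(10) ~ 2.303 cycles/mm.
freqs = np.linspace(0.0, 4.0, 401)
mtf = np.exp(-freqs)
f10 = limiting_resolution(freqs, mtf)
```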

  17. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
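
    The two-stage matching-and-cueing pipeline can be sketched as below; for brevity the figure-of-merit here is simply the strongest degree of match inside each thumbnail, rather than the paper's analysis of unambiguous local maxima:

```python
import numpy as np

def cue_thumbnails(match_stack, thumb=8):
    """Rank image thumbnails for analyst inspection.

    match_stack : (n_orientations, H, W) degree-of-match maps, one per
                  3D-object-model orientation.
    Returns ((row, col), score) thumbnail origins sorted in descending
    order of figure-of-merit.
    """
    dom = match_stack.max(axis=0)   # strongest match over all orientations
    H, W = dom.shape
    scores = []
    for r in range(0, H - thumb + 1, thumb):
        for c in range(0, W - thumb + 1, thumb):
            scores.append(((r, c), float(dom[r:r + thumb, c:c + thumb].max())))
    scores.sort(key=lambda item: -item[1])
    return scores

# A single strong object response at pixel (10, 20), orientation 2:
stack = np.zeros((4, 32, 32))
stack[2, 10, 20] = 0.9
ranked = cue_thumbnails(stack)
```

    The highest-ranked thumbnails are then the ones presented first for visual inspection.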

  18. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie at the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, the technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging, which aims to acquire target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert 3-D spatial information, the imaging field of view of the system, that is, its focal length, must be obtained. Then, based on the distance information of each space slice, the spatial position corresponding to each pixel can be computed. Camera calibration is an indispensable step in 3-D reconstruction, comprising estimation of the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of a zoom lens system. After reviewing camera calibration techniques comprehensively, a classic line-based calibration method is
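
    Once calibration has supplied the focal length, inverting a gated slice image to 3D points reduces to pinhole back-projection at the slice distance; a minimal sketch in which the symbol names are illustrative and the gated range is taken as the slice depth:

```python
def pixel_to_xyz(u, v, cx, cy, f_pix, slice_range):
    """Back-project pixel (u, v) of a gated slice image to camera
    coordinates, given the principal point (cx, cy), the calibrated
    focal length f_pix in pixels, and the slice distance."""
    x = (u - cx) * slice_range / f_pix
    y = (v - cy) * slice_range / f_pix
    return (x, y, slice_range)
```

    Applying this to every pixel of every slice, each with its own gated range, assembles the full 3D point set.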

  19. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  20. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
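
    A heavily simplified version of rigid GMM point-set registration (without the paper's orientation terms or bifurcation weighting) alternates soft correspondence with a weighted Procrustes update; parameter choices and the synthetic point sets are illustrative:

```python
import numpy as np

def rigid_register_gmm(X, Y, sigma=0.2, iters=30):
    """Toy rigid registration in the spirit of GMM point-set methods:
    model points X act as isotropic Gaussian centroids; the E-step
    computes soft correspondences to data points Y, and the M-step
    solves a weighted Procrustes (Kabsch) problem for R and t."""
    D = X.shape[1]
    R, t = np.eye(D), np.zeros(D)
    for _ in range(iters):
        TX = X @ R.T + t
        d2 = ((Y[:, None, :] - TX[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2.0 * sigma ** 2)) + 1e-300
        P /= P.sum(axis=1, keepdims=True)       # soft correspondences
        Np = P.sum()
        mu_x = (P.sum(axis=0) @ X) / Np
        mu_y = (P.sum(axis=1) @ Y) / Np
        A = (Y - mu_y).T @ P @ (X - mu_x)       # weighted cross-covariance
        U, _, Vt = np.linalg.svd(A)
        S = np.diag([1.0] * (D - 1) + [float(np.linalg.det(U @ Vt))])
        R = U @ S @ Vt                          # rotation (reflection-safe)
        t = mu_y - R @ mu_x
    return R, t

# Recover a known 5-degree rotation and small shift of a 3x3 grid.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, -0.08])
X = np.array([[float(i), float(j)] for i in range(3) for j in range(3)])
Y = X @ R_true.T + t_true
R_est, t_est = rigid_register_gmm(X, Y)
```

    The paper's extension adds centerline orientation to the Gaussian components and up-weights bifurcation points; the alternating structure stays the same.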

  1. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; both for instruments having ideal performance as well as for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

  2. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  3. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional electrodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  4. Comparison of quasi-3D and full-3D laser wakefield PIC simulations using azimuthal mode decomposition

    NASA Astrophysics Data System (ADS)

    Dalichaouch, Thamine; Yu, Peicheng; Davidson, Asher; Mori, Warren; Vieira, Jorge; Fonseca, Ricardo

    2015-11-01

    Laser wakefield acceleration (LWFA) has attracted a lot of interest as a possible compact particle accelerator. However, 3D simulations of plasma-based accelerators are computationally intensive, sometimes taking millions of core hours on today's computers. A quasi-3D particle-in-cell (PIC) approach has been developed to take advantage of azimuthal symmetry in LWFA (and PWFA) simulations by using a particle-in-cell description in r-z and a Fourier description in φ. Quasi-3D simulations of LWFA are computationally more efficient and faster than full-3D simulations because only the first few azimuthal harmonics are needed to capture the physics of the problem. We have developed a cylindrical mode decomposition diagnostic for 3D Cartesian geometry simulations to analyze the agreement between full-3D and quasi-3D PIC simulations of laser- and beam-plasma interactions. The diagnostic interpolates field data from full-3D PIC simulations onto an irregular cylindrical grid (r, φ, z). A Fourier decomposition is then performed on the interpolated 3D simulation data along the azimuthal direction. This diagnostic has the added advantage of separating the wakefields from the laser field. Preliminary results of this diagnostic for LWFA and PWFA simulations with symmetric and nearly symmetric spot sizes, as well as for laser-plasma interactions using lasers with orbital angular momentum (higher-order Laguerre-Gaussian modes), will be presented.
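
    The core of such a diagnostic, sampling Cartesian field data on a cylindrical ring and Fourier-decomposing in φ, can be sketched for a single transverse slice; grid sizes and the test field are illustrative, not simulation output:

```python
import numpy as np

def azimuthal_modes(field, origin, radius, n_phi=256, n_modes=3):
    """Sample a 2D Cartesian field slice on a ring of given radius
    (bilinear interpolation) and Fourier-decompose in the azimuthal
    angle, returning complex amplitudes of modes m = 0..n_modes-1."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    rows = origin[0] + radius * np.sin(phi)
    cols = origin[1] + radius * np.cos(phi)
    r0 = np.floor(rows).astype(int)
    c0 = np.floor(cols).astype(int)
    fr, fc = rows - r0, cols - c0
    ring = ((1 - fr) * (1 - fc) * field[r0, c0]
            + (1 - fr) * fc * field[r0, c0 + 1]
            + fr * (1 - fc) * field[r0 + 1, c0]
            + fr * fc * field[r0 + 1, c0 + 1])
    return np.fft.fft(ring)[:n_modes] / n_phi

# A pure m = 1 field, E(x, y) = x: on a ring of radius 10 its
# m = 1 Fourier amplitude is radius / 2 and all other modes vanish.
y, x = np.mgrid[0:65, 0:65]
modes = azimuthal_modes((x - 32).astype(float), (32, 32), 10.0)
```

    In the full diagnostic this is repeated for every (r, z) cell of the irregular cylindrical grid to build the per-mode field maps compared against the quasi-3D run.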

  5. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction, and develop a practical solution. We attempt to remove the noise existing in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the brain 3D visualization is given through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
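
    Wavelet shrinkage itself can be illustrated with a one-level Haar transform and a soft-threshold rule; this 1D sketch is a generic stand-in, not the paper's adaptive curve shrinkage in spherical coordinates:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage: transform, soft-threshold the
    detail coefficients, inverse transform (length must be even)."""
    s = np.asarray(signal, float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft shrinkage
    out = np.empty_like(s)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Small alternating "noise" on a flat signal is removed by shrinkage.
noisy = np.tile([1.0, 1.2], 8)
smooth = haar_denoise(noisy, 0.2)
```

    With threshold 0 the transform round-trips exactly, which is a handy sanity check on any shrinkage implementation.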

  6. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic-based scanner and operates on a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second, such as decontamination and decommissioning operations in which robotic systems are altering the scene, as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  7. 3D-spectral domain computational imaging

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Ferra, Herman; Lorenser, Dirk; Frisken, Steven

    2016-03-01

    We present a proof-of-concept experiment utilizing a novel "snap-shot" spectral domain OCT technique that captures a phase coherent volume in a single frame. The sample is illuminated with a collimated beam of 75 μm diameter and the back-reflected light is analyzed by a 2-D matrix of spectral interferograms. A key challenge that is addressed is simultaneously maintaining lateral and spectral phase coherence over the imaged volume in the presence of sample motion. Digital focusing is demonstrated for 5.0 μm lateral resolution over an 800 μm axial range.

  8. Morphometrics, 3D Imaging, and Craniofacial Development.

    PubMed

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  9. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  10. Confocal scanning laser microscopy with complementary 3D image analysis allows quantitative studies of functional state of ionoregulatory cells in the Nile tilapia (Oreochromis niloticus) following salinity challenge.

    PubMed

    Fridman, Sophie; Rana, Krishen J; Bron, James E

    2013-04-01

    The development of a novel three-dimensional image analysis technique of stacks generated by confocal laser scanning microscopy is described allowing visualization of mitochondria-rich cells (MRCs) in the seawater-adapted Nile tilapia in relation to their spatial location. This method permits the assessment and classification of both active and nonactive MRCs based on the distance of the top of the immunopositive cell from the epithelial surface. In addition, this technique offers the potential for informative and quantitative studies, for example, densitometric and morphometric measurements based on MRC functional state. Confocal scanning laser microscopy used with triple staining whole-mount immunohistochemistry was used to detect integumental MRCs in the yolk-sac larvae tail of the Nile tilapia following transfer from freshwater to elevated salinities, that is, 12.5 and 20 ppt. Mean active MRC volume was always significantly larger and displayed a greater staining intensity (GLM; P<0.05) than nonactive MRCs. Following transfer, the percentage of active MRCs was seen to increase as did MRC volume (GLM; P<0.05). PMID:23390074

  11. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, each under both binocular and monocular viewing conditions. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  12. 3-D laser radar simulation for autonomous spacecraft landing

    NASA Technical Reports Server (NTRS)

    Reiley, Michael F.; Carmer, Dwayne C.; Pont, W. F.

    1991-01-01

    A sophisticated 3D laser radar sensor simulation, developed and applied to the task of autonomous hazard detection and avoidance, is presented. This simulation includes a backward ray trace to sensor subpixels, incoherent subpixel integration, range dependent noise, sensor point spread function effects, digitization noise, and AM-CW modulation. Specific sensor parameters, spacecraft lander trajectory, and terrain type have been selected to generate simulated sensor data.

  13. Towards 3-D laser nano patterning in polymer optical materials

    NASA Astrophysics Data System (ADS)

    Scully, Patricia J.; Perrie, Walter

    2015-03-01

    Progress towards 3-D subsurface structuring of polymers using femtosecond lasers is presented. Highly localised refractive index changes can be generated deep in transparent optical polymers without pre-doping for photosensitisation or post-processing by annealing. Understanding the writing conditions surpasses the limitations of material, dimension and chemistry, facilitating unique structures formed entirely by laser-polymer interactions, free of material, dimensional, refractive-index and wavelength constraints. Numerical aperture, fluence, temporal pulse length, wavelength and incident polarisation are important parameters to consider in achieving the desired inscription. Non-linear aspects of multiphoton absorption, plasma generation, filamentation and the effects of incident polarisation on the writing conditions will be presented.

  14. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  15. 3D laser tracking of a particle in 3DFM

    NASA Astrophysics Data System (ADS)

    Desai, Kalpit; Welch, Gregory; Bishop, Gary; Taylor, Russell; Superfine, Richard

    2003-11-01

    The principal goal of 3D tracking in our home-built 3D Magnetic Force Microscope is to monitor movement of the particle with respect to the laser beam waist and to keep the particle at the center of the laser beam. The sensory element is a quadrant photodiode (QPD), which captures the scattering of light caused by particle motion with a bandwidth of up to 40 kHz. An XYZ translation stage is the driver element, which moves the particle back to the center of the laser with an accuracy of a couple of nanometers and a bandwidth of up to 300 Hz. Since our particles vary in size, composition and shape, instead of using an a priori model we use standard system identification techniques to obtain an optimal approximation of the relationship between particle motion and QPD response. We have developed position-feedback control software capable of 3-dimensional tracking of beads attached to cilia on living cells beating at up to 15 Hz. We have also modeled the control system of the instrument to simulate the performance of 3D particle tracking under different experimental conditions. Given an operational level of nanometers, noise poses a great challenge for the tracking system. We propose to use stochastic control theory approaches to increase the robustness of tracking.
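
    The system-identification step, fitting an empirical map from QPD readings to particle displacement without an a priori model, can be sketched as an affine least-squares fit over a calibration sweep; the channel count and sensitivity matrix below are made up for illustration:

```python
import numpy as np

def fit_qpd_map(responses, positions):
    """Least-squares fit of an affine map from QPD channel readings to
    particle displacement: positions ~ responses @ A + b.  A stand-in
    for black-box system identification; the real QPD response is only
    locally linear in the displacement."""
    R1 = np.hstack([responses, np.ones((len(responses), 1))])
    coef, *_ = np.linalg.lstsq(R1, positions, rcond=None)
    return coef[:-1], coef[-1]     # A: (n_channels, 3), b: (3,)

# Synthetic calibration sweep: known stage positions, simulated readings.
rng = np.random.default_rng(0)
pos = rng.uniform(-50, 50, size=(200, 3))   # commanded displacements, nm
G = rng.normal(size=(3, 4))                 # hypothetical channel sensitivities
qpd = pos @ G + 0.3                         # simulated 4-channel QPD response
A, b = fit_qpd_map(qpd, pos)
pred = qpd @ A + b
```

    In operation the fitted map converts each QPD sample into a displacement estimate that the stage controller then drives back to zero.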

  16. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and approaches based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, each with different methods suited to image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters and factors, reports work experiences, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, with comments on what can and cannot be done with each package. It concludes that each software package has advantages and limitations, and that the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  17. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault-system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal-processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes
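
    The amplitude and phase of the seismic wavelet that such attributes build on come from the analytic signal of each trace; a minimal FFT-based sketch on a synthetic trace:

```python
import numpy as np

def seismic_attributes(trace):
    """Amplitude-envelope and instantaneous-phase attributes of a single
    trace via the analytic signal (FFT-based Hilbert transform) -- the
    kind of amplitude/phase quantities attribute volumes are built from."""
    n = len(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0          # keep positive frequencies, doubled
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(np.fft.fft(trace) * h)
    return np.abs(analytic), np.angle(analytic)  # envelope, inst. phase

# A constant-amplitude cosine has an envelope of 1 everywhere.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
envelope, phase = seismic_attributes(np.cos(2 * np.pi * 5 * t))
```

    Running this trace-by-trace over a 3D volume yields attribute cubes whose short-range anomalies can then be mapped along interpreted faults.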

  18. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information that must be stored without any distortion. We propose an algorithm that applies watermarking to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D EIA data are then embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed by the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding; our method thereby overcomes the weakness of traditional watermarking methods, which have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
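As an illustration of ROI-preserving embedding, here is a minimal least-significant-bit sketch, not the paper's CAT/EIA scheme; the helper names `embed` and `extract` and all values are assumptions for illustration:

```python
import numpy as np

def embed(host, wm_bits, roi):
    """Hide watermark bits in the LSBs of pixels outside a rectangular ROI."""
    y0, y1, x0, x1 = roi
    out = host.copy()
    mask = np.ones(host.shape, dtype=bool)
    mask[y0:y1, x0:x1] = False              # protect the ROI
    idx = np.flatnonzero(mask)[:wm_bits.size]
    flat = out.ravel()                      # view into out
    flat[idx] = (flat[idx] & 0xFE) | wm_bits
    return out

def extract(marked, n_bits, roi):
    """Read watermark bits back from the non-ROI LSBs."""
    y0, y1, x0, x1 = roi
    mask = np.ones(marked.shape, dtype=bool)
    mask[y0:y1, x0:x1] = False
    idx = np.flatnonzero(mask)[:n_bits]
    return marked.ravel()[idx] & 1

host = np.full((8, 8), 128, dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
marked = embed(host, bits, roi=(2, 6, 2, 6))
assert np.array_equal(marked[2:6, 2:6], host[2:6, 2:6])  # ROI untouched
print(extract(marked, 6, roi=(2, 6, 2, 6)))
```

The paper's scheme embeds the EIA rather than raw bits, which is what provides robustness to damage, but the ROI-protection bookkeeping is analogous.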

  19. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3 dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as those employed in radio astronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  20. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the two computationally expensive steps currently required in the complete process of biomedical visualization, namely (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improving computation speed, this method improves visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
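The conventional forward step that this work avoids, a MIP, is simply a per-voxel maximum along the projection axis. A toy numpy sketch of that conventional step (not the paper's projection-domain method):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: collapse a 3D volume to a 2D image
    by keeping the brightest voxel along one axis."""
    return volume.max(axis=axis)

vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 7.0          # a single bright voxel
proj = mip(vol, axis=0)
print(proj[1, 3])           # -> 7.0
```

The paper's point is that when the scanner already delivers 2D projections, computing `vol` first and then projecting it again is redundant work.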

  1. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  2. SNR analysis of 3D magnetic resonance tomosynthesis (MRT) imaging

    NASA Astrophysics Data System (ADS)

    Kim, Min-Oh; Kim, Dong-Hyun

    2012-03-01

    In conventional 3D Fourier transform (3DFT) MR imaging, the signal-to-noise ratio (SNR) is governed by the well-known relationship of being proportional to the voxel size and the square root of the imaging time. Here, we introduce an alternative 3D imaging approach, termed MRT (Magnetic Resonance Tomosynthesis), which can generate a set of tomographic MR images similar to the multiple 2D projection images of x-ray imaging. A multiple-oblique-view (MOV) pulse sequence is designed to acquire the tomography-like images used in the tomosynthesis process, and an iterative back-projection (IBP) reconstruction method is used to reconstruct 3D images. SNR analysis shows that the resolution-SNR tradeoff is not governed by the relationship of the typical 3DFT MR imaging case. The proposed method provides a higher SNR than the conventional 3D imaging method, at the cost of a partial loss of slice-direction resolution. It is expected that this method can be useful in extremely low SNR cases.
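The conventional 3DFT relationship referred to above can be written, in illustrative notation (the symbols are not taken from the paper), as

```latex
\mathrm{SNR}_{\mathrm{3DFT}} \;\propto\; \Delta x \,\Delta y \,\Delta z \,\sqrt{T_{\mathrm{acq}}}
```

where \(\Delta x \,\Delta y \,\Delta z\) is the voxel volume and \(T_{\mathrm{acq}}\) is the total acquisition time. MRT's claim is that its SNR does not follow this voxel-volume scaling.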

  3. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low noise optical receivers to detect fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 um pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. The preamplifier uses a capacitive transimpedance amplifier (CTIA) structure with two capacitors offering switchable capacitance for passive/active dual-mode imaging. The main element of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to the signal processing requirements of a ROIC. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer's amplifier uses a rail-to-rail topology. In active imaging mode, the integration time is 80 ns, and for integration currents from 200 nA to 4 uA the nonlinearity is less than 1%. In passive imaging mode, the integration time is 150 ns, and for integration currents from 1 nA to 20 nA the nonlinearity is also less than 1%.

  4. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gestures in 3D space is examined using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity and color data for a moving object in real time without assigning markers to the targets. First, gestures are tracked in 2D space by calculating 2D flow vectors at each point with an ordinary optical flow estimation method, based on the time sequence of intensity data. The location of each point after 2D movement is then detected on the x-y plane using the obtained 2D flow vectors. Depth information for each point after movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of the resulting 3D flow vectors allow us to track the 3D movement of the target, so it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
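The lifting of 2D flow vectors to 3D using the range data can be sketched as follows; the array layout (range images indexed `[y, x]`) and the helper name `flow_2d_to_3d` are assumptions for illustration, not the authors' code:

```python
import numpy as np

def flow_2d_to_3d(points_xy, flow_xy, range_t0, range_t1):
    """Lift 2D optical-flow vectors to 3D using two range images.

    points_xy: (N, 2) integer pixel locations (x, y) at time t0
    flow_xy:   (N, 2) 2D flow vectors in pixels
    range_t0, range_t1: 2D range (depth) images at t0 and t1
    Returns (N, 3) flow vectors (dx, dy, dz).
    """
    moved = np.rint(points_xy + flow_xy).astype(int)
    z0 = range_t0[points_xy[:, 1], points_xy[:, 0]]
    z1 = range_t1[moved[:, 1], moved[:, 0]]
    return np.column_stack([flow_xy, z1 - z0])

# Toy example: one point moves 1 px right while approaching the sensor
r0 = np.full((5, 5), 10.0)
r1 = np.full((5, 5), 10.0)
r1[2, 3] = 9.0
pts = np.array([[2, 2]])
flow = np.array([[1.0, 0.0]])
f3 = flow_2d_to_3d(pts, flow, r0, r1)
print(f3)  # [[ 1.  0. -1.]]
```

Sequences of such 3D flow vectors are what the continuous DP matching stage would then classify.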

  5. 3D Laser Triangulation for Plant Phenotyping in Challenging Environments

    PubMed Central

    Kjaer, Katrine Heinsvig; Ottosen, Carl-Otto

    2015-01-01

    To increase the understanding of how the plant phenotype is formed by genotype and environmental interactions, simple and robust high-throughput plant phenotyping methods should be developed and considered. This would not only broaden the application range of phenotyping in the plant research community, but also increase researchers' ability to study plants in their natural environments. By studying plants in their natural environment at high temporal resolution, more knowledge on how multiple stresses interact in defining the plant phenotype could lead to a better understanding of the interaction between plant responses and epigenetic regulation. In the present paper, we evaluate a commercial 3D NIR-laser scanner (PlantEye, Phenospex B.V., Heerlen, The Netherlands) for tracking daily changes in plant growth with high precision in challenging environments. Firstly, we demonstrate that the NIR laser beam of the scanner does not affect plant photosynthetic performance. Secondly, we demonstrate that it is possible to estimate phenotypic variation among the growth patterns of ten genotypes of Brassica napus L. (rapeseed), using a simple linear correlation between scanned parameters and destructive growth measurements. Our results demonstrate the high potential of 3D laser triangulation for simple measurements of phenotypic variation in challenging environments and at high temporal resolution. PMID:26066990
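The simple linear correlation between scanned parameters and destructive measurements can be illustrated as follows; the paired values (scanner-derived leaf area versus dry weight) are invented for illustration, not data from the study:

```python
import numpy as np

# Hypothetical paired measurements: scanner-derived 3D leaf area (cm^2)
# versus destructively measured dry weight (g)
leaf_area = np.array([120.0, 150.0, 180.0, 210.0, 260.0, 300.0])
dry_weight = np.array([1.9, 2.4, 2.8, 3.4, 4.1, 4.8])

# Fit dry_weight ~ a * leaf_area + b and report the Pearson correlation
a, b = np.polyfit(leaf_area, dry_weight, 1)
r = np.corrcoef(leaf_area, dry_weight)[0, 1]
print(f"slope={a:.4f}, intercept={b:.3f}, r={r:.3f}")
```

A correlation of this kind is what allows non-destructive scanner output to stand in for destructive growth measurements over time.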

  6. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  7. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  8. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired with a DSLR camera costing around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, making it affordable for public users and convenient for accessing narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. The 3D modeling pipeline is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC open-source software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off and .obj. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are applied to the final results. The reconstructed 3D models can be provided for public access via websites, DVDs and printed materials. The highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.

  9. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation. In our approach we combine the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces, and its segmentation result can in turn be used to update the parameters of the Gibbs prior models. The two methods work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, x-ray, and ultrasound images. High quality results are achieved at relatively low time cost. We also performed validation using expert manual segmentation as the ground truth; the results show that the hybrid segmentation may have further clinical use.

  10. Optical monitoring of scoliosis by 3D medical laser scanner

    NASA Astrophysics Data System (ADS)

    Rodríguez-Quiñonez, Julio C.; Sergiyenko, Oleg Yu.; Preciado, Luis C. Basaca; Tyrsa, Vera V.; Gurko, Alexander G.; Podrygalo, Mikhail A.; Lopez, Moises Rivas; Balbuena, Daniel Hernandez

    2014-03-01

    Three-dimensional recording of the human body surface or anatomical areas has gained importance in many medical applications. In this paper, our 3D Medical Laser Scanner is presented. It is based on the novel principle of dynamic triangulation. We analyze the method of operation, medical applications, orthopedic diseases such as scoliosis, and the most common skin types, in order to employ the system in the most appropriate way. A group of medical problems related to the optimal application of optical scanning is analyzed. Finally, experiments are conducted to verify the performance of the proposed system and the uncertainty of its method.

  11. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: shape-space and regression learning, followed by registration. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiographs (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues during registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time 2D projection image or a small set thereof. PMID:24058278
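The core learning step, a linear regression from projection intensity residues to model parameters, can be sketched as follows. This is a single-scale toy with a simulated linear imaging operator; the actual method is multi-scale and uses DRRs, and all names and sizes here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training set: n_samples parameter vectors (rows of P) and the
# projection intensity residues they produce (rows of R), via a linear
# imaging operator A plus noise: r = A @ p + noise.
n_samples, n_params, n_pixels = 200, 3, 50
A = rng.normal(size=(n_pixels, n_params))
P = rng.normal(size=(n_samples, n_params))
R = P @ A.T + 0.01 * rng.normal(size=(n_samples, n_pixels))

# Learning: solve min ||R @ M - P||^2 for the linear operator M, so that
# at registration time parameters are estimated as p ~= r @ M.
M, *_ = np.linalg.lstsq(R, P, rcond=None)

# Registration-time use: estimate parameters for a new residue
p_true = np.array([0.5, -1.0, 2.0])
r_new = A @ p_true
p_est = r_new @ M
print(np.round(p_est, 2))
```

CLARET applies operators like `M` iteratively and coarse-to-fine; this sketch shows only why a learned linear map can invert the residue-to-parameter relationship quickly.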

  12. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  13. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine positions. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in the supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed in perfect sagittal views.
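Given the anatomical references described (femoral head centers, sacral endplate center and inclination), PI reduces to the angle between the endplate normal and the line from the hip axis to the endplate center. A sketch under assumed landmark inputs (all coordinates are illustrative):

```python
import numpy as np

def pelvic_incidence(fem_head_l, fem_head_r, endplate_center, endplate_normal):
    """Angle (degrees) between the sacral-endplate normal and the line
    from the hip-axis midpoint to the endplate centre."""
    hip_center = (np.asarray(fem_head_l) + np.asarray(fem_head_r)) / 2.0
    v = np.asarray(endplate_center) - hip_center
    n = np.asarray(endplate_normal)
    cos_a = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Toy sagittal geometry: endplate 10 cm above and 5 cm behind the hip
# axis midpoint, with the endplate normal pointing straight up
pi_angle = pelvic_incidence([-8, 0, 0], [8, 0, 0], [0, -5, 10], [0, 0, 1])
print(round(pi_angle, 2))  # 26.57
```

Working with the two 3D landmark vectors directly is what makes the measurement independent of acquisition projection and subject orientation.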

  14. 3D camera assisted fully automated calibration of scanning laser Doppler vibrometers

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Mertens, Luc; Vanlanduit, Steve

    2016-06-01

    Scanning laser Doppler vibrometers (LDVs) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning LDV systems the user manually draws a grid of measurement locations on a 2D camera image of the product, and determining the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure shortens prototype testing because the physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which the measurement locations are pre-defined. A time-of-flight camera operates strictly in the near-infrared spectrum and uses a band filter to improve the signal-to-noise ratio of its measurement. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer, and the spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the pre-defined locations. Another benefit of this methodology is that it incorporates automatic mapping between the CAD model and the vibration measurements; this mapping can be used to visualize measurements directly on a 3D CAD model. Furthermore, the orientation of the CAD model is known with respect to the laser beam, and this information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical simulations.

  15. Single 3D cell segmentation from optical CT microscope images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Reeves, Anthony P.

    2014-03-01

    The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods: a global-threshold gradient-based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure-of-merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.
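The Dice Similarity Coefficient used for evaluation is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

seg = np.zeros((8, 8), dtype=bool); seg[2:6, 2:6] = True  # 16 px
ref = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True  # 16 px
print(dice(seg, ref))  # overlap = 9 px -> 2*9/(16+16) = 0.5625
```

The same formula extends unchanged to 3D masks, which is how the 3D reference comparisons above are scored.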

  16. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  17. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure, with the aid of a DSP and a field-programmable gate array (FPGA), to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition and fringe pattern analysis, we developed a multi-threaded application program under the DSP/BIOS RTOS (real-time operating system); since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
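Phase measurement profilometry recovers a wrapped phase map from phase-shifted fringe images. A minimal four-step sketch of that analysis (the paper's DSP/FPGA pipeline and its particular phase-shifting algorithm are not reproduced; this is the textbook four-step formula):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2:
    I_k = A + B*cos(phi + k*pi/2), k = 0..3."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes on a 1D line with a known phase ramp
phi = np.linspace(-1.0, 1.0, 100)   # ground-truth phase (radians)
A, B = 0.5, 0.4                     # fringe offset and modulation
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
print(np.max(np.abs(phi_est - phi)))  # small numerical error
```

The result is wrapped to (-pi, pi]; phase unwrapping, which the paper accelerates in software, is the subsequent step.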

  19. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  20. 3D nanotube-based composites produced by laser irradiation

    SciTech Connect

    Ageeva, S A; Bobrinetskii, I I; Nevolin, Vladimir K; Podgaetskii, Vitalii M; Selishchev, S V; Simunin, M M; Konov, Vitalii I; Savranskii, V V; Ponomareva, O V

    2009-04-30

    3D nanocomposites have been fabricated through self-assembly under near-IR cw laser irradiation, using four types of multiwalled and single-walled carbon nanotubes produced by chemical vapour deposition, disproportionation on Fe clusters and cathode sputtering in an inert gas. The composites were prepared by laser irradiation of aqueous solutions of bovine serum albumin until the solvent was evaporated off and a homogeneous black material was obtained: modified albumin reinforced with nanotubes. The consistency of the composites ranged from paste-like to glass-like. Atomic force microscopy was used to study the surface morphology of the nanomaterials. The nanocomposites had a 3D quasi-periodic structure formed by almost spherical or toroidal particles 200-500 nm in diameter and 30-40 nm in visible height. Their inner, quasi-periodic structure was occasionally seen through surface microfractures. The density and hardness of the nanocomposites exceed those of microcrystalline albumin powder by 20% and by a factor of 3-5, respectively. (nanostructures)

  1. A miniature high resolution 3-D imaging sonar.

    PubMed

    Josserand, Tim; Wolley, Jason

    2011-04-01

    This paper discusses the design and development of a miniature, high resolution 3-D imaging sonar. The design utilizes frequency steered phased arrays (FSPA) technology. FSPAs present a small, low-power solution to the problem of underwater imaging sonars. The technology provides a method to build sonars with a large number of beams without the proportional power, circuitry and processing complexity. The design differs from previous methods in that the array elements are manufactured from a monolithic material. With this technique the arrays are flat and considerably smaller element dimensions are achievable which allows for higher frequency ranges and smaller array sizes. In the current frequency range, the demonstrated array has ultra high image resolution (1″ range×1° azimuth×1° elevation) and small size (<3″×3″). The design of the FSPA utilizes the phasing-induced frequency-dependent directionality of a linear phased array to produce multiple beams in a forward sector. The FSPA requires only two hardware channels per array and can be arranged in single and multiple array configurations that deliver wide sector 2-D images. 3-D images can be obtained by scanning the array in a direction perpendicular to the 2-D image field and applying suitable image processing to the multiple scanned 2-D images. This paper introduces the 3-D FSPA concept, theory and design methodology. Finally, results from a prototype array are presented and discussed. PMID:21112066
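    The frequency-steering principle described above (a fixed inter-element phase step makes the beam direction a function of frequency) can be sketched numerically. This is a minimal illustration, not the authors' implementation: the element count (16), pitch (1 mm), phase step (pi/2) and underwater sound speed (1500 m/s) are assumed values chosen only to make the effect visible.

    ```python
    import numpy as np

    def beam_peak_deg(freq, pitch, n_elem, phase_step, c=1500.0):
        """Peak direction (degrees) of a linear array driven with a fixed
        inter-element phase step. The array factor peaks where
        k * pitch * sin(theta) = phase_step, so the beam steers as the
        frequency (and hence k) changes."""
        theta = np.radians(np.linspace(-90.0, 90.0, 3601))
        k = 2.0 * np.pi * freq / c
        n = np.arange(n_elem)
        phases = (k * pitch * np.sin(theta)[:, None] - phase_step) * n
        af = np.abs(np.exp(1j * phases).sum(axis=1))  # array factor magnitude
        return np.degrees(theta[np.argmax(af)])
    ```

    Under these assumptions the peak moves from about 48.6 degrees at 500 kHz to 30 degrees at 750 kHz, which is the mechanism an FSPA uses to map frequency bins to beams in a forward sector.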

  2. Real-time multispectral 3-D photoacoustic imaging of blood phantoms

    NASA Astrophysics Data System (ADS)

    Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging is exquisitely sensitive to blood and can infer blood oxygenation based on multispectral images. In this work we present multispectral real-time 3D photoacoustic imaging of blood phantoms. We used a custom-built 128-channel hemispherical transducer array coupled to two Nd:YAG pumped OPO laser systems synchronized to provide double pulse excitation at 680 nm and 1064 nm wavelengths, all during a triggered series of ultrasound pressure measurements lasting less than 300 μs. The results demonstrated that 3D PAI is capable of differentiating between oxygenated and deoxygenated blood at high speed at mm-level resolution.

  3. 3D reconstruction based on CT image and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxun; Zhang, Mingmin

    2004-03-01

    Reconstructing the 3-D model of the liver and its internal piping system, and simulating the liver operation on that model, can increase the accuracy and safety of liver surgery: it helps minimize surgical trauma, shorten operation time, raise the success rate, reduce medical costs and speed the patient's recovery. This text expatiates the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system from CT images and simulate the liver operation. A direct volume rendering method establishes the 3D model of the liver. Under the OpenGL environment, a space point rendering method is adopted to display the liver's internal piping system and to simulate the operation. Finally, the wavelet transform method is adopted to compress the medical image data.

  4. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data is produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consists of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in assessment of low back pain.

  5. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

    Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. Modelling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images: the system only requires a sequence of images taken with a camera whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging the point cloud of the object from depth-map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point-cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Second, pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image the points of interest corresponding to those in previous images are refined or corrected, eliminating the vertical parallax between the images. The next step is to calibrate the camera: its intrinsic and external parameters are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is then acquired by using a non-local cost aggregation method for stereo matching, and a point-cloud sequence is built from the scene depths and the external camera parameters. The point-cloud model is approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer-graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3

  6. Wave-CAIPI for Highly Accelerated 3D Imaging

    PubMed Central

    Bilgic, Berkin; Gagoski, Borjan A.; Cauley, Stephen F.; Fan, Audrey P.; Polimeni, Jonathan R.; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To introduce the Wave-CAIPI (Controlled Aliasing in Parallel Imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. Methods The Wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line, while modifying the 3D phase encoding strategy to incur inter-slice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high quality image reconstruction with Wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and Quantitative Susceptibility Mapping (QSM). Results Wave-CAIPI enables full-brain gradient echo (GRE) acquisition at 1 mm isotropic voxel size and R=3×3 acceleration with maximum g-factors of 1.08 at 3T, and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies 2D-CAIPI and Bunched Phase Encoding, Wave-CAIPI yields up to 2-fold reduction in maximum g-factor for 9-fold acceleration at both field strengths. Conclusion Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. PMID:24986223

  7. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.

  8. Imaging thin-bed reservoirs with 3-D seismic

    SciTech Connect

    Hardage, B.A.

    1996-12-01

    This article explains how a 3-D seismic data volume, a vertical seismic profile (VSP), electric well logs and reservoir pressure data can be used to image closely stacked thin-bed reservoirs. This interpretation focuses on the Oligocene Frio reservoir in South Texas which has multiple thin-beds spanning a vertical interval of about 3,000 ft.

  9. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. 
Evaluation on registration accuracy between pseudo-3D method and true 3D method has
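    The building block of the pseudo-3D scheme, a 2D image-based registration using SSD as the similarity measure and Powell's conjugate direction method as the search engine, can be sketched as follows. This is an illustrative, shift-only sketch on synthetic data, not the authors' code; the full method also searches a rotation per view and cycles through the transaxial, sagittal and coronal views iteratively.

    ```python
    import numpy as np
    from scipy import ndimage, optimize

    def ssd(shift, fixed, moving):
        """Sum of Square Difference after shifting `moving` by (dy, dx)."""
        moved = ndimage.shift(moving, shift, order=1, mode="nearest")
        return np.sum((fixed - moved) ** 2)

    def register_2d(fixed, moving):
        """Recover the 2D shift aligning `moving` to `fixed`
        (SSD similarity, Powell's conjugate direction search)."""
        res = optimize.minimize(ssd, x0=np.zeros(2), args=(fixed, moving),
                                method="Powell")
        return res.x

    # Synthetic view: a smooth blob, displaced by a known offset.
    yy, xx = np.mgrid[0:64, 0:64]
    fixed = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / 50.0)
    moving = ndimage.shift(fixed, (-3.0, 5.0), order=1, mode="nearest")
    dy, dx = register_2d(fixed, moving)   # approximately (3, -5): undoes the displacement
    ```

    In the pseudo-3D method this 2D step would be applied to each orthogonal view in turn, the next view being resampled with the updated transformation matrix before its own registration.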

  10. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  11. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
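    The DIBR alternative mentioned above, synthesizing the second eye's view from one rendered image plus its z-buffer, amounts to a horizontal per-pixel shift by the disparity d = f * B / Z. The following is a minimal grayscale sketch with an assumed focal length and baseline and only naive hole filling; a real game driver would paint far-to-near to resolve overlaps and fill disocclusions more carefully.

    ```python
    import numpy as np

    def dibr_view(image, depth, focal, baseline):
        """Depth-Image-Based Rendering sketch: shift each pixel of a single
        rendered view horizontally by its disparity d = focal * baseline / Z,
        then fill disocclusion holes from the nearest pixel to the left."""
        h, w = image.shape
        out = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        disparity = np.round(focal * baseline / depth).astype(int)
        xs = np.arange(w)
        for y in range(h):
            tx = xs - disparity[y]            # target column per source pixel
            ok = (tx >= 0) & (tx < w)
            out[y, tx[ok]] = image[y, ok]
            filled[y, tx[ok]] = True
            for x in range(1, w):             # naive background propagation
                if not filled[y, x]:
                    out[y, x] = out[y, x - 1]
        return out
    ```

    For a constant-depth plane the result is simply the whole image shifted by one constant disparity; varying Z per pixel is what produces the stereoscopic parallax.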

  12. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. 
As expected, the 3-D histogram of the real data was
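    One elementary step of such a ruleset, thresholding the 3-D data cube on a property range and keeping only connected objects above a minimum size, can be sketched as follows. This is a hypothetical, minimal illustration of the idea, not the classification scheme used in the study.

    ```python
    import numpy as np
    from scipy import ndimage

    def extract_objects(volume, low, high, min_voxels):
        """Threshold a 3-D data cube on a property range, label connected
        components, and keep only objects above a minimum size."""
        mask = (volume >= low) & (volume <= high)
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
        return np.isin(labels, keep), len(keep)

    # Synthetic cube: one 4x4x4 body plus an isolated noise voxel, both
    # inside the value range of interest.
    vol = np.zeros((20, 20, 20))
    vol[5:9, 5:9, 5:9] = 5.0
    vol[15, 15, 15] = 5.0
    seg, count = extract_objects(vol, low=4.0, high=6.0, min_voxels=10)
    # count == 1: the noise voxel fails the size rule
    ```

    Changing `low`, `high` or `min_voxels` and re-running is the fast, repeatable threshold adjustment the abstract describes.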

  13. Practical applications of 3D sonography in gynecologic imaging.

    PubMed

    Andreotti, Rochelle F; Fleischer, Arthur C

    2014-11-01

    Volume imaging in the pelvis has been well demonstrated to be an extremely useful technique, largely based on its ability to reconstruct the coronal plane of the uterus that usually cannot be visualized using traditional 2-dimensional (2D) imaging. As a result, this technique is now a part of the standard pelvic ultrasound protocol in many institutions. A variety of valuable applications of 3D sonography in the pelvis are discussed in this article. PMID:25444101

  14. 3D Winding Number: Theory and Application to Medical Imaging

    PubMed Central

    Becciu, Alessandro; Fuster, Andrea; Pottek, Mark; van den Heuvel, Bart; ter Haar Romeny, Bart; van Assen, Hans

    2011-01-01

    We develop a new formulation, mathematically elegant, to detect critical points of 3D scalar images. It is based on a topological number, which is the generalization to three dimensions of the 2D winding number. We illustrate our method by considering three different biomedical applications, namely, detection and counting of ovarian follicles and neuronal cells and estimation of cardiac motion from tagged MR images. Qualitative and quantitative evaluation emphasizes the reliability of the results. PMID:21317978
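    In 2D, the winding number that this work generalizes counts how many times a vector field (for example the image gradient) rotates along a closed loop; a nonzero value signals a critical point inside the loop. A minimal numerical sketch on analytic fields, not the authors' 3D formulation:

    ```python
    import numpy as np

    def winding_number(vx, vy):
        """Winding number of a planar vector field sampled at consecutive
        points along a closed loop."""
        theta = np.arctan2(vy, vx)
        dtheta = np.diff(theta, append=theta[:1])          # close the loop
        dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi    # wrap to (-pi, pi]
        return int(round(dtheta.sum() / (2 * np.pi)))

    # Sample the gradient field of f(x, y) = x^2 + y^2 on the unit circle:
    # its minimum at the origin is a critical point of index +1.
    t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    index = winding_number(2 * np.cos(t), 2 * np.sin(t))   # -> 1
    ```

    The gradient of a saddle such as f(x, y) = x^2 - y^2 yields index -1 on the same loop, which is how extrema and saddles are told apart.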

  15. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV: the image topology map significantly reduces the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  16. 3-D laser anemometer measurements in a labyrinth seal

    NASA Technical Reports Server (NTRS)

    Morrison, G. L.; Tatterson, G. B.; Johnson, M. C.

    1988-01-01

    The flow field inside a seven cavity labyrinth seal with a 0.00127 m clearance was measured using a 3-D laser Doppler anemometer system. Through the use of this system, the mean velocity vector and the entire Reynolds stress tensor distributions were measured for the first, third, fifth, and seventh cavities of the seal. There was one large recirculation region present in the cavity for the flow condition tested, Re = 28,000 and Ta = 7,000. The axial and radial mean velocities as well as all of the Reynolds stress terms became cavity independent by the third cavity. The azimuthal mean velocity varied from cavity to cavity, with its magnitude increasing as the flow progressed downstream.

  17. 3D ultrasound image segmentation using wavelet support vector machines

    PubMed Central

    Akbari, Hamed; Fei, Baowei

    2012-01-01

    Purpose: Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. Methods: This segmentation method utilizes a statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate the prostate and nonprostate tissue. This method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. The weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard and a variety of methods are used to evaluate the performance of the segmentation method. Results: The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% ± 2.3% and that the sensitivity is 87.7% ± 4.9%. Conclusions: The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate. PMID:22755682

  18. Fragmentary area repairing on the edge of 3D laser point cloud based on edge extracting of images and LS-SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Ziming; Hao, Xiangyang; Liu, Songlin; Zhao, Song

    2011-06-01

    When repairing holes in a point cloud, the indeterminate boundary of a fragmentary area on the edge of the cloud makes the repair difficult. In view of this, the article advances a method for repairing fragmentary areas on the edge of a point cloud based on edge extraction from images and LS-SVM. After registration of the point cloud and the corresponding image, the sub-pixel edge is extracted from the image. The training points and the sub-pixel edge are then projected onto a constructed characteristic plane to determine the bounds and positions for re-sampling. Finally, the equation of the fragmentary area is obtained by Least-Squares Support Vector Machines to accomplish the repair. The experimental results demonstrate that the method guarantees accurate, fine repair.

  19. Fully automatic and robust 3D registration of serial-section microscopic images.

    PubMed

    Wang, Ching-Wei; Budiman Gosno, Eric; Li, Yen-Sheng

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of drosophila brain neural tissues, serial confocal laser scanning microscopic images of a drosophila brain, serial histopathological images of renal cortical tissues and a synthetic test case. The results show that the presented fully automatic method is promising to reassemble continuous volumes and minimize artificial deformations for all data, and outperforms four state-of-the-art 3D registration techniques to consistently produce solid 3D reconstructed anatomies with fewer discontinuities and deformations. PMID:26449756

  20. Fully automatic and robust 3D registration of serial-section microscopic images

    PubMed Central

    Wang, Ching-Wei; Budiman Gosno, Eric; Li, Yen-Sheng

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of drosophila brain neural tissues, serial confocal laser scanning microscopic images of a drosophila brain, serial histopathological images of renal cortical tissues and a synthetic test case. The results show that the presented fully automatic method is promising to reassemble continuous volumes and minimize artificial deformations for all data, and outperforms four state-of-the-art 3D registration techniques to consistently produce solid 3D reconstructed anatomies with fewer discontinuities and deformations. PMID:26449756

  1. Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark

    2013-04-01

    Multi-View Stereo (MVS) as a low-cost technique for precise 3D reconstruction can be a rival for laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge set of stereo images captured of the object (e.g. 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, capture and processing time increase when a vast number of high-resolution images are employed. Moreover, some parts of the object are often missing due to the lack of coverage of all areas. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo or, optionally, single images from a large image dataset. The approach focusses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon Laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness.

  2. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect Time of Flight (iTOF), starting from the phase-delay measurement of a sinusoidally modulated light. The system acquires live movies with a frame rate up to 50 frames/s at distances ranging from 10 cm up to 7.5 m.
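    The iTOF principle, range from the phase delay of a sinusoidally modulated signal, reduces to d = c * phase / (4 * pi * f_mod). Below is a minimal sketch with ideal four-bucket correlation samples; the 20 MHz modulation frequency is an assumption (not stated in the abstract), chosen because it yields exactly the 7.5 m unambiguous range quoted.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light (m/s)

    def itof_distance(a0, a1, a2, a3, f_mod):
        """Indirect time-of-flight range from four phase-stepped correlation
        samples (0, 90, 180, 270 degrees) of the modulated return signal."""
        phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
        return C * phase / (4 * np.pi * f_mod)   # d = c * phase / (4 pi f)

    # Assumed 20 MHz modulation: unambiguous range c / (2 f) = 7.5 m,
    # matching the maximum distance quoted in the abstract.
    f_mod = 20e6
    true_d = 3.0
    phi = 4 * np.pi * f_mod * true_d / C                       # round-trip phase
    a0, a1, a2, a3 = (np.cos(phi + k * np.pi / 2) for k in range(4))
    d = itof_distance(a0, a1, a2, a3, f_mod)                   # recovers 3.0 m
    ```

    Beyond the unambiguous range the phase wraps around, which is why such systems quote a hard maximum distance.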

  3. 3-D segmentation of human sternum in lung MDCT images.

    PubMed

    Pazokifard, Banafsheh; Sowmya, Arcot

    2013-01-01

    A fully automatic novel algorithm is presented for accurate 3-D segmentation of the human sternum in lung multi-detector computed tomography (MDCT) images. The segmentation result is refined by employing active contours to remove calcified costal cartilage that is attached to the sternum. For each dataset, the costal notches (sternocostal joints) are localized in 3-D by using a sternum mask and the positions of the costal notches on it as reference. The proposed algorithm for sternum segmentation was tested on 16 complete lung MDCT datasets, and comparison of the segmentation results to the reference delineation provided by a radiologist shows high sensitivity (92.49%) and specificity (99.51%) and a small mean distance (d_mean = 1.07 mm). The total average Euclidean distance error for costal notch positioning in 3-D is 4.2 mm. PMID:24110446

  4. Incremental volume reconstruction and rendering for 3-D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry

    1992-09-01

    In this paper, we present approaches toward interactive visualization of a real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.

  5. Automatic needle segmentation in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Ding, Mingyue; Cardinal, H. Neale; Guan, Weiguang; Fenster, Aaron

    2002-05-01

    In this paper, we propose to use 2D image projections to automatically segment a needle in a 3D ultrasound image. This approach is motivated by the twin observations that the needle is more conspicuous in a projected image, and its projected area is a minimum when the rays are cast parallel to the needle direction. To avoid the computational burden of an exhaustive 2D search for the needle direction, a faster 1D search procedure is proposed. First, a plane which contains the needle direction is determined by the initial projection direction and the (estimated) direction of the needle in the corresponding projection image. Subsequently, an adaptive 1D search technique is used to adjust the projection direction iteratively until the projected needle area is minimized. In order to remove noise and complex background structure from the projection images, a priori information about the needle position and orientation is used to crop the 3D volume, and the cropped volume is rendered with Gaussian transfer functions. We have evaluated this approach experimentally using agar and turkey breast phantoms. The results show that it can find the 3D needle orientation within 1 degree, in about 1 to 3 seconds on a 500 MHz computer.
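    The projected-area minimization at the heart of this approach can be illustrated with a toy 1D search. This is a sketch under simplifying assumptions (a synthetic point-cloud needle, a grid-cell area measure, candidate directions confined to one plane); all names are hypothetical:

    ```python
    import numpy as np

    def projected_area(points, direction, cell=0.5):
        """Approximate silhouette area of a point cloud projected along
        `direction` by counting occupied grid cells in the projection plane."""
        d = direction / np.linalg.norm(direction)
        # build an orthonormal basis (u, v) of the projection plane
        u = np.cross(d, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:          # d parallel to z: pick another axis
            u = np.cross(d, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(d, u)
        uv = np.stack([points @ u, points @ v], axis=1)
        return len(set(map(tuple, np.floor(uv / cell).astype(int))))

    def search_needle_direction(points, angles):
        """1D search: the projected area is minimal when the rays are cast
        parallel to the needle axis (candidates in the x-z plane here)."""
        return min(angles, key=lambda a: projected_area(
            points, np.array([np.sin(a), 0.0, np.cos(a)])))

    # synthetic needle along the z axis
    needle = np.stack([np.zeros(200), np.zeros(200), np.linspace(0, 50, 200)], 1)
    theta = search_needle_direction(needle, np.linspace(-0.3, 0.3, 61))
    ```

    The paper's adaptive iterative search plays the same role as the brute-force `min` here, converging on the direction of minimum projected area without an exhaustive 2D sweep.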

  6. Vhrs Stereo Images for 3d Modelling of Buildings

    NASA Astrophysics Data System (ADS)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents a project carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment concerns the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with a photogrammetric workstation - Summit Evolution combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for orientation of the satellite image pair, stereo-measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS errors for the model reconstructed on the basis of the RPC coefficients only were 16.6 m, 2.7 m and 47.4 m for the X, Y and Z coordinates, respectively. With the addition of 5 control points the RMS errors improved to 0.7 m, 0.7 m and 1.0 m; the best results were achieved when the RMS errors were estimated from deviations at 17 check points (with 5 control points) and amounted to 0.4 m, 0.5 m and 0.6 m for X, Y and Z, respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards used for 3D modelling of buildings in Google SketchUp software. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (in terms of the number of details) had been reconstructed at level LoD1, while the accuracy of these models corresponded to level LoD2.

  7. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion, we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement. PMID:27093439

  8. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF (intermediate) frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
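    The range-from-IF-frequency principle the abstract describes is the standard chirp (FMCW) relation. A minimal sketch, using illustrative sweep parameters that are assumptions rather than the authors' actual system values:

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def range_from_if(f_if, sweep_bw, sweep_time):
        """In a chirp system the echo mixes with the transmitted sweep; the
        beat (IF) frequency is proportional to the round-trip delay:
            f_if = (B / T) * (2R / c)   ->   R = c * f_if * T / (2 * B)
        """
        return C * f_if * sweep_time / (2.0 * sweep_bw)

    # e.g. a 10 m target, with an assumed 6 GHz sweep over 1 ms,
    # produces a beat frequency of roughly 400 kHz
    f_beat = (6e9 / 1e-3) * (2 * 10.0 / C)
    ```

    Reading `f_beat` out per pixel of the GDD FPA is what would turn the 2D intensity image into a 3D range image.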

  9. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

  10. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three-dimensional (3D), and therefore better understood if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging, as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience, but this feeling is desirable for a better examination and understanding of the structure under consideration in 3D. One of the very few possibilities to generate real 3D images that work on a 2D display is to use so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye and the other image only by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image that does not require a high-tech viewing device is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
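    The red-cyan anaglyph construction described above reduces to a simple channel recombination. A minimal sketch, assuming 8-bit RGB arrays; all names are illustrative:

    ```python
    import numpy as np

    def anaglyph(left_rgb, right_rgb):
        """Red-cyan anaglyph: red channel from the left image, green and blue
        (cyan) from the right. Through red-cyan glasses each eye then sees
        only one of the two offset viewpoints, producing the 3D effect."""
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]  # red taken from the left view
        return out

    # tiny synthetic stereo pair
    left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 200
    right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 1:] = 120
    img = anaglyph(left, right)
    ```

    The color-accuracy drawback the text mentions is visible in the code itself: the left view contributes only its red channel, so the composite cannot preserve both views' full colors.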

  11. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  12. 3D imaging of fetus vertebra by synchrotron radiation microtomography

    NASA Astrophysics Data System (ADS)

    Peyrin, Francoise; Pateyron-Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

    1997-10-01

    A synchrotron radiation computed microtomography system allowing high-resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high-resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to the imaging of bone structure. To study growth, fetal vertebra samples at different gestational ages were imaged. The first results show that the fetal vertebra is quite different from adult bone, both in terms of density and organization.

  13. Texture blending on 3D models using casual images

    NASA Astrophysics Data System (ADS)

    Liu, Xingming; Liu, Xiaoli; Li, Ameng; Liu, Junyao; Wang, Huijing

    2013-12-01

    In this paper, a method for constructing a photorealistic textured model using a 3D structured-light digitizer is presented. Our method acquires range images and texture images around the object; the range images are registered and integrated to construct a geometric model of the object. The system is calibrated and the poses of the texture camera are determined so that the relationship between texture and geometric model is established. After that, a global optimization is applied to assign compatible textures to adjacent surfaces, followed by a levelling procedure to remove artifacts due to varying lighting, the approximate geometric model and so on. Lastly, we demonstrate the effect of our method by constructing a model of a real-world object.

  14. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration by the user is necessary: the user performs a rough preconfiguration of both prosthesis models so that the fine-matching process has a reasonable starting point. After that, an automated gradient-based fine-matching process determines the best absolute position and orientation: this iterative process changes each of the 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine-matching process).
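    The iterative fine-matching loop described above (perturb each of the 6 pose parameters by a minimal amount until the matching function peaks) can be sketched as a generic coordinate ascent. The matching function below is a toy stand-in, not the paper's image-based score, and all names are hypothetical:

    ```python
    import numpy as np

    def fine_match(score, params, step=1.0, min_step=1e-3):
        """Coordinate-ascent sketch of the fine-matching loop: perturb each of
        the 6 pose parameters (3 rotations, 3 translations) by +/- step, keep
        any change that raises the matching score, and halve the step when no
        parameter improves."""
        p = np.asarray(params, dtype=float)
        while step >= min_step:
            improved = False
            for i in range(len(p)):
                for delta in (step, -step):
                    q = p.copy()
                    q[i] += delta
                    if score(q) > score(p):
                        p, improved = q, True
            if not improved:
                step /= 2.0
        return p

    # toy quadratic matching function peaking at the "true" pose
    true = np.array([2.0, -1.0, 0.5, 0.1, -0.2, 0.0])
    pose = fine_match(lambda q: -np.sum((q - true) ** 2), np.zeros(6))
    ```

    In the paper the starting point for this loop comes from the user's rough manual preconfiguration, which is what keeps such a local search from a wrong local maximum.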

  15. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
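    For a damped linearized inversion, the model resolution and posterior covariance matrices named above can be formed directly from the Jacobian. A small synthetic sketch (random sensitivity matrix, unit data noise; all values illustrative, not from the report):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((40, 10))   # linearized sensitivity (Jacobian)
    Cd_inv = np.eye(40)                 # data weighted to unit variance
    lam = 0.1                           # damping / prior weight

    # damped least squares: m_est = (G^T Cd^-1 G + lam I)^-1 G^T Cd^-1 d
    A = G.T @ Cd_inv @ G
    H = np.linalg.inv(A + lam * np.eye(10))

    R = H @ A                           # model resolution matrix
    C_post = H @ A @ H.T                # posterior model covariance
    param_err = np.sqrt(np.diag(C_post))  # per-parameter error estimate
    ```

    Plotting a column of `R` shows how a unit perturbation in one parameter is smeared across its neighbours (R approaches the identity as `lam` goes to zero), while `param_err` is the square-root-of-diagonal image the abstract describes. The report's conjugate-gradient estimators replace the explicit inverse when, as in 3D, `H` is too large to form.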

  16. Right main bronchus perforation detected by 3D-image

    PubMed Central

    Bense, László; Eklund, Gunnar; Jorulf, Hakan; Farkas, Árpád; Balásházy, Imre; Hedenstierna, Göran; Krebsz, Ádám; Madas, Balázs Gergely; Strindberg, Jerker Eden

    2011-01-01

    A male metal worker, who had never smoked, contracted debilitating dyspnoea in 2003, which then deteriorated until 2007. Spirometry and chest x-rays provided no diagnosis. A 3D image of the airways was reconstructed from a high-resolution CT (HRCT) in 2007, showing peribronchial air on the right side, mostly along the presegmental airways. After digital subtraction of the image of the peribronchial air, a hole on the cranial side of the right main bronchus was detected. The perforation could be identified on re-examination of the HRCTs from 2007 and 2009, but not in 2010, when it had possibly healed. The patient's occupational exposure to evaporating chemicals might have contributed to the perforation and hampered its healing. A 3D HRCT reconstruction should be considered to detect bronchial anomalies, including wall perforation, when unexplained dyspnoea or other chest symptoms call for extended investigation. PMID:22679238

  17. Ultra-High Resolution 3D Imaging of Whole Cells.

    PubMed

    Huang, Fang; Sirinakis, George; Allgeyer, Edward S; Schroeder, Lena K; Duim, Whitney C; Kromann, Emil B; Phan, Thomy; Rivera-Molina, Felix E; Myers, Jordan R; Irnov, Irnov; Lessard, Mark; Zhang, Yongdeng; Handel, Mary Ann; Jacobs-Wagner, Christine; Lusk, C Patrick; Rothman, James E; Toomre, Derek; Booth, Martin J; Bewersdorf, Joerg

    2016-08-11

    Fluorescence nanoscopy, or super-resolution microscopy, has become an important tool in cell biological research. However, because of its usually inferior resolution in the depth direction (50-80 nm) and rapidly deteriorating resolution in thick samples, its practical biological application has been effectively limited to two dimensions and thin samples. Here, we present the development of whole-cell 4Pi single-molecule switching nanoscopy (W-4PiSMSN), an optical nanoscope that allows imaging of three-dimensional (3D) structures at 10- to 20-nm resolution throughout entire mammalian cells. We demonstrate the wide applicability of W-4PiSMSN across diverse research fields by imaging complex molecular architectures ranging from bacteriophages to nuclear pores, cilia, and synaptonemal complexes in large 3D cellular volumes. PMID:27397506

  18. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  19. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  20. 3D VSP imaging in the Deepwater GOM

    NASA Astrophysics Data System (ADS)

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface- and sediment-related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic sections. To help address these challenges, BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher-resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high-dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are reduced risk in well placement, improved reserve calculation, and better understanding of compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort, BP has influenced contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline, thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  1. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from the tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that the use of combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  2. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  3. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian sub-continent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. Firstly, the overall visual quality of the scene is assessed in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping and other image artefacts. Uniform targets in desert and sea regions are identified, for which detailed radiometric performance evaluation for the IR channels is carried out. The mean brightness temperature (BT) of the targets is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise-equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.
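    The per-target statistics described above (mean BT bias against a reference, and NEdT as the spread over a radiometrically uniform target) might be computed as follows. A sketch; the function names are hypothetical:

    ```python
    import numpy as np

    def nedt(bt_samples):
        """Noise-equivalent differential temperature over a uniform target:
        the sample standard deviation of its brightness temperatures."""
        return float(np.asarray(bt_samples, dtype=float).std(ddof=1))

    def bias(bt_measured, bt_reference):
        """Mean brightness-temperature bias against an independent reference."""
        return float(np.mean(np.asarray(bt_measured, dtype=float)
                             - np.asarray(bt_reference, dtype=float)))
    ```

    Tracking `nedt` and `bias` per channel over the eight-month window is what supports the consistency claims (bias < 4 K, NEdT within specification).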

  4. Automated Identification of Fiducial Points on 3D Torso Images

    PubMed Central

    Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2013-01-01

    Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

  5. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  6. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation

    PubMed Central

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-01-01

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  7. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A.

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated, with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component, with an FRE_z of 0.76 mm and a TRE_z of 0.85 mm.
A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety, and dual modality imaging for target localization.
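
    The reported FRE and TRE figures can be reproduced for any point-based rigid registration; the sketch below assumes a standard least-squares (Kabsch/Horn) rigid fit, which is a common choice and not necessarily the authors' calibration code.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch/Horn) mapping src points to dst.

    Points are rows of (n, 3) arrays; returns rotation R and translation t.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def registration_errors(fid_src, fid_dst, tgt_src, tgt_dst):
    """FRE: RMS residual on the fiducials used for the fit.
    TRE: RMS residual on held-out target points."""
    R, t = rigid_fit(fid_src, fid_dst)
    fre = np.sqrt(np.mean(np.sum((fid_src @ R.T + t - fid_dst)**2, axis=1)))
    tre = np.sqrt(np.mean(np.sum((tgt_src @ R.T + t - tgt_dst)**2, axis=1)))
    return fre, tre
```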

  8. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

    A 3-D real-time measurement system for seam outline based on Moiré projection is proposed and designed. The system is composed of an LD, a grating, a CCD, a video A/D converter, an FPGA, a DSP and an output interface. The principle and hardware makeup of the high-speed, real-time image processing circuit, based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), are introduced. The noise generation mechanism in poor welding field conditions is analyzed for the case where Moiré stripes are projected onto a welding workpiece surface. A median filter is adopted to smooth the acquired original laser image of the seam, and measurement results for a 3-D outline image of the weld groove are then provided.
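
    The median-filter denoising step can be illustrated with SciPy; this is a generic sketch, not the authors' DSP/FPGA implementation, and the synthetic stripe geometry and speckle positions are invented for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

# Synthetic laser-stripe image: a bright projected line plus isolated
# impulsive speckles, standing in for arc noise in the welding field.
img = np.zeros((64, 64))
img[30:34, :] = 1.0                        # the projected laser line
noisy = img.copy()
for r, c in [(5, 10), (12, 40), (20, 7), (45, 50), (58, 22)]:
    noisy[r, c] = 1.0                      # isolated bright speckles

# A 3x3 median removes single-pixel impulses while leaving the
# multi-pixel-wide stripe intact.
smooth = median_filter(noisy, size=3)
```

The key property exploited here is that a median filter rejects outliers without blurring edges, so the stripe's profile (from which the seam outline is triangulated) is preserved.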

  9. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes/readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets), but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  10. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  11. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

    Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years, photos of flattened surfaces of undisturbed soil samples impregnated with fluorescent resin, and of soil thin sections under the microscope, have been the only way available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade, very good results have been obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray micro-tomography has become the most widely applied technique, offering the best compromise between cost, resolution and image size. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected owing to technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work, a comparison between the two methods above has been carried out in order to define their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, from an Ap horizon of an alluvial soil showing vertic characteristics, was reconstructed using both a desktop X-ray micro-tomograph (Skyscan 1172) and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy) with the aim of overcoming most of the typical limitations of that technique. The best image resolution was 7.5 µm per voxel using X-ray micro-CT, while 20 µm was the best value using serial sectioning.

  12. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable to successfully match pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced by the additional use of information on the third spatial coordinate of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  13. Mesh generation from 3D multi-material images.

    PubMed

    Boltcheva, Dobrina; Yvinec, Mariette; Boissonnat, Jean-Daniel

    2009-01-01

    The problem of generating realistic computer models of objects represented by 3D segmented images is important in many biomedical applications. Labelled 3D images impose particular challenges for meshing algorithms because multi-material junctions form features such as surface patches, edges and corners which need to be preserved in the output mesh. In this paper, we propose a feature-preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps and overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for the purpose of realistic visualization or finite element simulations. PMID:20426123

  14. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

    An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses an adjustable, tiny microelectromechanical system (MEMS) mirror, made by SiWave, to sweep the laser frequency. The laser device itself is small (70 x 50 x 13 mm). The LADAR builds on mature fiber-optic telecommunication technologies, making this innovation an efficient performer. The tiny size and light weight make the system useful for commercial and industrial applications including surface damage inspection, range measurement, and 3D imaging.
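
    As a sanity check on the quoted figure, the range resolution of a frequency-swept (FMCW) ladar is set by the sweep bandwidth via dR = c / (2B). This back-of-the-envelope calculation is our own, not a figure from the article.

```python
# FMCW ladar range resolution: dR = c / (2 * B), so B = c / (2 * dR).
c = 3.0e8          # speed of light, m/s
dR = 1e-3          # 1 mm range resolution, as quoted
B = c / (2 * dR)   # required optical frequency sweep, Hz
print(B)           # 150000000000.0, i.e. a ~150 GHz sweep
```

A sweep of this width is readily achievable with tunable telecom-band lasers, consistent with the abstract's emphasis on fiber-optic telecommunication components.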

  15. Development of an algorithm to measure defect geometry using a 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Kilambi, S.; Tipton, S. M.

    2012-08-01

    Current fatigue life prediction models for coiled tubing (CT) require accurate measurements of the defect geometry. Three-dimensional (3D) laser imaging has shown promise toward becoming a nondestructive, non-contacting method of surface defect characterization. Laser imaging provides a detailed photographic image of a flaw, in addition to a detailed 3D surface map from which its critical dimensions can be measured. This paper describes algorithms to determine defect characteristics, specifically depth, width, length and projected cross-sectional area. Curve-fitting methods were compared; implicit algebraic fits have a higher probability of convergence than explicit geometric fits. Among the algebraic fits, the Taubin circle fit has the least error. The algorithm was able to extract the dimensions of the flaw geometry from the scanned data of CT to within a tolerance of about 0.127 mm, close to the tolerance specified for the laser scanner itself, when compared to measurements made using traveling microscopes. The algorithm computes the projected surface area of the flaw, which could previously only be estimated from the dimension measurements and assumptions made about cutter shape. Although shadows compromised the accuracy of the shape characterization, especially for deep and narrow flaws, the results indicate that the algorithm with the laser scanner can be used for non-destructive evaluation of CT in the oil field industry. Further work is needed to improve accuracy, to eliminate shadow effects and to reduce radial deviation.
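
    The Taubin circle fit singled out above has a compact SVD formulation (following Chernov's well-known presentation of the method); the sketch below is that textbook version, not the authors' code.

```python
import numpy as np

def taubin_circle_fit(x, y):
    """Algebraic (Taubin) circle fit to scattered points; returns (xc, yc, r)."""
    xm, ym = x.mean(), y.mean()
    X, Y = x - xm, y - ym
    Z = X * X + Y * Y
    Zmean = Z.mean()
    Z0 = (Z - Zmean) / (2.0 * np.sqrt(Zmean))
    # smallest right singular vector of the design matrix [Z0 X Y]
    _, _, Vt = np.linalg.svd(np.column_stack([Z0, X, Y]), full_matrices=False)
    A = Vt[-1].copy()
    A[0] /= 2.0 * np.sqrt(Zmean)
    A = np.append(A, -Zmean * A[0])
    xc = -A[1] / (2.0 * A[0]) + xm
    yc = -A[2] / (2.0 * A[0]) + ym
    r = np.sqrt(A[1]**2 + A[2]**2 - 4.0 * A[0] * A[3]) / (2.0 * abs(A[0]))
    return xc, yc, r
```

Being algebraic, the fit needs no initial guess and no iteration, which is why its convergence behaviour compares favourably with explicit geometric fits.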

  16. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models were created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  17. Quality control loop for 3D laser beam cutting

    NASA Astrophysics Data System (ADS)

    Spitznagel, Juergen

    1996-08-01

    Existing systems for computer integrated manufacturing are based on the principle of the process chain: the product runs through different production sections, such as design, work planning and manufacturing, in sequential order. The data generated by one production sequence are transferred via an interface to the following production sequence. These tightly packed production sequences leave little scope for responding to quality deviations. This deficit is highlighted particularly in 3D laser cutting processes, where a series of preliminary tests is required to achieve an optimum machining result. Quality control loops play an important role in restricting the scope of necessary testing to a minimum. The control loop presented contains a CAD system for designing the workpiece, an offline programming system for developing working strategies and NC/RC programs, as well as a shop-floor oriented tool for recording quality data of the workpiece. The systems are coupled by an integrated product model. The control loop feeds quality data back to operations planning in the form of rules for processing strategies and technological data, so that the quality of the production process is enhanced. It is intended to supply optimum process parameters, so that the number of preliminary tests can be reduced. The control loop also contributes quality enhancement measures which serve as rules for the designers.

  18. Towards magnetic 3D x-ray imaging

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve the speed, size and energy efficiency of spin-driven devices. Multidimensional visualization techniques will be crucial to achieving mesoscience goals. Magnetic tomography is of large interest for understanding, e.g., interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals and nanowires, or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy, combining X-MCD as an element-specific magnetic contrast mechanism with the high spatial and temporal resolution afforded by Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley, CA), a new rotation stage allows recording an angular series (up to 360 deg) of high precision 2D projection images. Applying state-of-the-art reconstruction algorithms, it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films, and compare to other 3D imaging approaches, e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and ERC under the EU FP7 program (grant agreement No. 306277).

  19. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
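
    The idea of tracking local minima across scales can be sketched as follows: smooth the range image with Gaussians of increasing width and keep only minima that persist at every scale. This toy version (our own construction, with an assumed four-scale ladder and 5x5 neighbourhood) illustrates the principle; it is not the authors' watershed implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def persistent_minima(depth, sigmas=(1, 2, 4, 8)):
    """Return pixels that are local minima of the smoothed range image at
    every scale; persistent minima mark feature-point candidates."""
    keep = np.ones(depth.shape, dtype=bool)
    for s in sigmas:
        sm = gaussian_filter(depth, s)
        keep &= (sm == minimum_filter(sm, size=5))
    return np.argwhere(keep)
```

Spurious shallow minima (noise) disappear under heavy smoothing, while genuine pits, such as the occlusal fossae of a molar, survive all scales.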

  20. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    The performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the satellite Earth Observing-1 (EO-1) mission using the hyperspectral imager instrument (Hyperion), which have high input SNR, are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities to be less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) provide prediction of denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly, and universality of the prediction across different numbers of channels is demonstrated.
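
    The hard-thresholding 3D DCT denoising whose efficiency the paper predicts can be sketched as a minimal non-overlapping-block version; the block size of 8 and threshold factor beta = 2.7 are common defaults in the DCT-denoising literature, not values taken from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct3_denoise(cube, sigma, beta=2.7, block=8):
    """Blockwise 3D DCT hard thresholding for a (z, y, x) image cube.

    Coefficients with magnitude below beta*sigma are zeroed; for AWGN of
    known sigma this suppresses mostly-noise coefficients.
    """
    out = np.zeros_like(cube)
    thr = beta * sigma
    nz, ny, nx = cube.shape
    for z in range(0, nz - block + 1, block):
        for y in range(0, ny - block + 1, block):
            for x in range(0, nx - block + 1, block):
                c = dctn(cube[z:z+block, y:y+block, x:x+block], norm='ortho')
                c[np.abs(c) < thr] = 0.0
                out[z:z+block, y:y+block, x:x+block] = idctn(c, norm='ortho')
    return out
```

The prediction studied in the paper exploits exactly the statistic visible here: the fraction of coefficients falling below the threshold governs how much noise energy is removed.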

  1. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  2. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
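
    Evaluating a similarity measure over all model positions at once with the FFT amounts to computing a cross-correlation surface via the convolution theorem; the generic sketch below illustrates that mechanism on ordinary feature images (the paper's phase-angle vectors and model projections are not reproduced here).

```python
import numpy as np

def match_surface(image_feat, template_feat):
    """Cross-correlation of a feature image with a template, evaluated for
    every translation at once via the FFT (convolution theorem).

    Peaks in the returned surface are candidate object positions, which can
    then be sorted by similarity value for inspection.
    """
    F = np.fft.rfft2(image_feat)
    T = np.fft.rfft2(template_feat, s=image_feat.shape)
    return np.fft.irfft2(F * np.conj(T), s=image_feat.shape)
```

For an N-pixel image this costs O(N log N) per model orientation, instead of O(N * M) for sliding an M-pixel template explicitly, which is what makes an exhaustive position search tractable.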

  3. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944

  4. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  5. Accuracy evaluation of segmentation for high resolution imagery and 3D laser point cloud data

    NASA Astrophysics Data System (ADS)

    Ni, Nina; Chen, Ninghua; Chen, Jianyu

    2014-09-01

    High resolution satellite imagery and 3D laser point cloud data provide precise geometry, rich spectral information and clear texture of features. The segmentation of high resolution remote sensing images and 3D laser point clouds is the basis of object-oriented remote sensing image analysis, as the segmentation results directly influence the accuracy of subsequent analysis and discrimination. Currently, a common segmentation theory to support these algorithms is still lacking, so when we face a specific problem we should determine the applicability of the segmentation method through segmentation accuracy assessment, and then determine an optimal segmentation. To date, the most common methods for evaluating the effectiveness of a segmentation method are subjective evaluation and supervised evaluation. To provide a more objective evaluation result, we have carried out the following work: analysis and comparison of previously proposed image segmentation accuracy evaluation methods, namely area-based metrics, location-based metrics and combined metrics. 3D point cloud data, gathered by a Riegl VZ1000, were used to make a two-dimensional transformation of the point cloud data. The object-oriented segmentation results for aquaculture farm, building and farmland polygons were used as test objects and adopted to evaluate segmentation accuracy.
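
    One common family of area-based metrics computes over- and under-segmentation rates for a reference polygon against its best-overlapping segment, combined as an RMS (in the spirit of the Clinton et al. goodness measures). The sketch below is such a generic formulation, not the evaluation code used in the paper.

```python
import numpy as np

def area_metrics(seg, ref_mask):
    """Area-based segmentation accuracy for one reference polygon.

    seg:      integer label image from the segmentation under test.
    ref_mask: boolean mask of the reference (ground-truth) polygon.
    Returns (over-segmentation rate, under-segmentation rate, RMS combination).
    """
    labels = np.unique(seg[ref_mask])
    best = max(labels, key=lambda l: np.sum((seg == l) & ref_mask))
    inter = np.sum((seg == best) & ref_mask)
    over = 1.0 - inter / np.sum(seg == best)
    under = 1.0 - inter / np.sum(ref_mask)
    return over, under, np.sqrt((over**2 + under**2) / 2.0)
```

A perfect match yields (0, 0, 0); values approaching 1 indicate that the segment spills far outside the reference (over) or covers only a small part of it (under).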

  6. Calibration of an intensity ratio system for 3D imaging

    NASA Astrophysics Data System (ADS)

    Tsui, H. T.; Tang, K. C.

    1989-03-01

    An intensity ratio method for 3D imaging is proposed, with error analysis given for assessment and future improvements. The method is cheap and reasonably fast, as it requires no mechanical scanning or laborious correspondence computation. One drawback of intensity ratio methods that hampers their widespread use is the undesirable change of image intensity, usually caused by differences in reflection from different parts of an object surface and by the automatic iris or gain control of the camera. In our method, the gray-level patterns used include a uniform pattern, a staircase pattern and a sawtooth pattern, to make the system more robust against errors in the intensity ratio. 3D information on the surface points of an object can be derived from the intensity ratios of the images by triangulation. A reference back plane is put behind the object to monitor the change in image intensity. Errors due to camera calibration, projector calibration, variations in intensity, imperfection of the slides, etc. are analyzed. Early experiments with the system, using a Newvicon CCTV camera with back plane intensity correction, give a mean-square range error of about 0.5 percent. Extensive analysis of the various errors is expected to yield methods for improving the accuracy.
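
    The core of the intensity-ratio idea is that dividing the structured-pattern image by the uniform-pattern image cancels the unknown surface reflectance pixel-wise, leaving a quantity that encodes the projector coordinate. The toy illustration below (image size, ramp values, and reflectance map are all invented) demonstrates that cancellation.

```python
import numpy as np

rng = np.random.default_rng(1)
albedo = 0.2 + 0.8 * rng.random((32, 32))                # unknown reflectance map
sawtooth = np.tile(np.linspace(0.1, 0.9, 32), (32, 1))   # projected ramp pattern

I_sawtooth = albedo * sawtooth   # camera image under the sawtooth pattern
I_uniform = albedo * 1.0         # camera image under the uniform pattern

ratio = I_sawtooth / I_uniform   # reflectance cancels exactly, pixel by pixel
```

In practice the ratio is then mapped to a projector plane angle and depth is recovered by triangulation; the back-plane monitoring described above corrects the residual intensity drift that this ideal division does not remove.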

  7. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D, finite-difference, prestack, depth migration remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D, prestack, depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.

  8. Imaging PVC gas pipes using 3-D GPR

    SciTech Connect

    Bradford, J.; Ramaswamy, M.; Peddy, C.

    1996-11-01

    Over the years, many enhancements have been made by the oil and gas industry to improve the quality of seismic images. The GPR project at GTRI borrows heavily from these technologies in order to produce 3-D GPR images of PVC gas pipes. As will be demonstrated, improvements in GPR data acquisition, 3-D processing and visualization schemes yield good images of PVC pipes in the subsurface. Data have been collected in cooperation with the local gas company and at a test facility in Texas. Surveys were conducted over both a metal pipe and PVC pipes of diameters ranging from 1/2 in. to 4 in. at depths from 1 ft to 3 ft in different soil conditions. The metal pipe produced very good reflections and was used to fine tune and optimize the processing run stream. It was found that the following steps significantly improve the overall image: (1) Statics for drift and topography compensation, (2) Deconvolution, (3) Filtering and automatic gain control, (4) Migration for focusing and resolution, and (5) Visualization optimization. The processing flow implemented is relatively straightforward, simple to execute and robust under varying conditions. Future work will include testing resolution limits, effects of soil conditions, and leak detection.
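
    Of the processing steps listed, automatic gain control is the easiest to sketch: each sample of a trace is divided by the RMS amplitude in a sliding window, so that late, attenuated arrivals are boosted to a level comparable with early ones. The window length below is illustrative, not a value from the survey.

```python
import numpy as np

def agc(trace, win=25):
    """Automatic gain control for a single GPR trace.

    Normalizes each sample by the local RMS amplitude in a centered
    sliding window (edge-padded so the output has the input's length).
    """
    pad = win // 2
    power = np.pad(trace**2, pad, mode='edge')
    rms = np.sqrt(np.convolve(power, np.ones(win) / win, mode='valid'))
    return trace / np.maximum(rms, 1e-12)
```

AGC deliberately destroys true amplitude information, which is why it is applied after statics and deconvolution, as a display-oriented step ahead of migration and visualization.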

  9. Depth-controlled 3D TV image coding

    NASA Astrophysics Data System (ADS)

    Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo

    1998-04-01

    Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels, according to their visual sensitivity. Through an a priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the reproduced volume space in order to code them selectively based on their visual relevance. The region of interest is here identified as the one in focus for the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for image blocks lying in less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details in the more relevant image portions. From an implementation point of view, it is worth noticing that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show such improvements as better image quality for a given transmission bit rate, or graceful quality degradation of the reconstructed images with decreasing data rates.

  10. Ice shelf melt rates and 3D imaging

    NASA Astrophysics Data System (ADS)

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate bandwidth, VHF, ice penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra wide bandwidth (UWB), UHF, ice penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depth of bottom crevasses and deviations from hydrostatic equilibrium. While our single channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground-based multi-channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  11. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
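The mean-subtraction method described above can be sketched directly: compute the mean of each spatial plane of a spatially-low-pass subband, subtract it, and keep the means as side information for the decoder. The subband shape and values below are illustrative assumptions.

```python
import numpy as np

def subtract_plane_means(subband):
    """For a 3D spatially-low-pass subband (bands x rows x cols),
    remove each spatial plane's mean so the data are zero-mean, as
    2D-image coders expect; return the means for the decoder."""
    means = subband.mean(axis=(1, 2), keepdims=True)
    return subband - means, means.ravel()

rng = np.random.default_rng(1)
# Spatial planes with means far from zero, as the abstract notes.
subband = rng.normal(loc=100.0, scale=2.0, size=(4, 16, 16))
centered, means = subtract_plane_means(subband)
```

The centered planes are zero-mean, and adding the stored means back reconstructs the subband exactly, so the step is lossless.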

  12. Inflight performance of a second-generation photon-counting 3D imaging lidar

    NASA Astrophysics Data System (ADS)

    Degnan, John; Machan, Roman; Leventhal, Ed; Lawrence, David; Jodor, Gabriel; Field, Christopher

    2008-04-01

    Sigma Space Corporation has recently developed a compact 3D imaging and polarimetric lidar suitable for use in a small aircraft or mini-UAV. A frequency-doubled Nd:YAG microchip laser generates 6 microjoule, subnanosecond pulses at fire rates up to 22 kHz. A Diffractive Optical Element (DOE) breaks the 532 nm beam into a 10x10 array of Gaussian beamlets, each containing about 1 mW of laser power (50 nJ @ 20 kHz). The reflected radiation in each beamlet is imaged by the receive optics onto individual pixels of a high efficiency, 10x10 pixel, multistop detector. Each pixel is then input to one channel of a 100 channel, multistop timer demonstrated to have a 93 picosecond timing (1.4 cm range) resolution and an event recovery time of only 1.6 nsec. Thus, each green laser pulse produces a 100 pixel volumetric 3D image. The residual infrared energy at 1064 nm is used for polarimetry. The scan pattern and frequency of a dual wedge optical scanner, synchronized to the laser fire rate, are tailored to provide contiguous coverage of a ground scene in a single overflight. In both rooftop and preliminary flight tests, the lidar has produced high spatial resolution 3D images of terrain, buildings, tree structures, power lines, and bridges with a data acquisition rate up to 2.2 million multistop 3D pixels per second. Current tests are aimed at defining the lidar's ability to image through water columns and tree canopies.
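The quoted figure of 93 picoseconds of timing resolution corresponding to 1.4 cm of range resolution follows from the round-trip time-of-flight relation, range = c * t / 2. A quick check:

```python
# Round-trip time-of-flight: a timing bin of dt seconds corresponds
# to a range bin of c * dt / 2 (the pulse travels out and back).
C = 299_792_458.0  # speed of light, m/s

def range_resolution(timing_resolution_s):
    return C * timing_resolution_s / 2.0

dr = range_resolution(93e-12)  # 93 ps timer resolution, in meters
```

This gives roughly 0.014 m, matching the 1.4 cm range resolution stated in the abstract.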

  13. 3D fluorescence anisotropy imaging using selective plane illumination microscopy.

    PubMed

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-08-24

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202

  14. 3D fluorescence anisotropy imaging using selective plane illumination microscopy

    PubMed Central

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-01-01

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202

  15. 3D imaging with a linear light source

    NASA Astrophysics Data System (ADS)

    Lunazzi, José J.; Rivera, Noemí I. R.

    2008-04-01

    In a previous system we showed how the three-dimensionality of an object can be projected and preserved on a diffractive screen, which is just a simple diffractive holographic lens. A transmission object is illuminated with an extended filament of a white light lamp and no additional element is necessary. The system forms three-dimensional (3D) images with normal depth (orthoscopic) of the shadow type. The continuous parallax, perfect sharpness and additional characteristics of the image depend on the width and extension of the luminous filament and the properties of the diffractive lens. This new imaging system is shown to inspire an interesting extension to non-perfect reflective or refractive imaging elements because the sharpness of the image depends only on the width of the source. As new light sources are being developed that may result in very thin linear white light sources, for example, light emitting diodes, it may be useful to further develop this technique. We describe an imaging process in which a rough Fresnel metallic mirror can give a sharp image of an object due to the reduced width of a long filament lamp. We will discuss how the process could be extended to Fresnel lenses or to any aberrating imaging element.

  16. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3d Structure Lines

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques has undergone a revolution in the last few years. For such applications, a large amount of images, acquired with high-resolution industrial cameras close to the bottom surfaces with a mobile platform, are required to be stitched into a wide-view single composite image. The conventional idea of stitching a panorama with the affine model or the homographic model often suffers from serious problems due to poor texture and out-of-focus blurring introduced by depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of bridge bottom surfaces, which are extracted from 3D camera data. First, we propose to initially align each image in geometry based on its rough position and orientation acquired with both a laser range finder (LRF) and a high-precision incremental encoder, and these images are divided into several groups with the rough position and orientation data. Secondly, the 3D structure lines of bridge bottom surfaces are extracted from the 3D cloud points acquired with 3D cameras, which impose additional strong constraints on geometrical alignment of structure lines in adjacent images to perform a position and orientation optimization in each group to increase the local consistency. Thirdly, a homographic refinement between groups is applied to increase the global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which greatly eliminates both the luminance differences and the color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of our proposed approaches.
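The homographic refinement step rests on mapping image points through a 3x3 homography (homogeneous multiply followed by perspective divide). A minimal sketch, with an assumed translation homography rather than one estimated from real correspondences:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 image points through a 3x3 homography: lift to
    homogeneous coordinates, multiply, then divide by the last row."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# A pure-translation homography shifting points by (+5, -3), standing
# in for the between-group refinement transform.
H_shift = np.array([[1.0, 0.0, 5.0],
                    [0.0, 1.0, -3.0],
                    [0.0, 0.0, 1.0]])
pts = np.array([[10.0, 20.0], [0.0, 0.0]])
shifted = apply_homography(H_shift, pts)
```

In the actual pipeline the homography would be estimated from matched structure lines between groups; here it is fixed only to show the mapping.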

  17. 3D reconstruction of outdoor environments from omnidirectional range and color images

    NASA Astrophysics Data System (ADS)

    Asai, Toshihiro; Kanbara, Masayuki; Yokoya, Naokazu

    2005-03-01

    This paper describes a 3D modeling method for wide area outdoor environments which is based on integrating omnidirectional range and color images. In the proposed method, outdoor scenes can be efficiently digitized by an omnidirectional laser rangefinder which can obtain a 3D shape with high-accuracy and an omnidirectional multi-camera system (OMS) which can capture a high-resolution color image. Multiple range images are registered by minimizing the distances between corresponding points in the different range images. In order to register multiple range images stably, the points on the plane portions detected from the range data are used in the registration process. The position and orientation acquired by the RTK-GPS and the gyroscope are used as initial values for the simultaneous registration. The 3D model which is obtained by registration of range data is mapped by the texture selected from omnidirectional images in consideration of the resolution of the texture and occlusions of the model. In experiments, we have carried out 3D modeling of our campus with the proposed method.
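The core of registering range images by minimizing distances between corresponding points is the closed-form rigid alignment step (the Kabsch/SVD solution); iterating it with nearest-neighbor matching gives ICP-style registration. A minimal sketch on synthetic correspondences, not the authors' plane-constrained variant:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Rotation R and translation t minimizing sum ||R @ s_i + t - d_i||^2
    for known correspondences (Kabsch/SVD solution)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflection
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(2)
src = rng.standard_normal((20, 3))
theta = 0.3  # ground-truth rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = best_rigid_transform(src, dst)
```

With noiseless correspondences the transform is recovered exactly; with real scans one alternates this step with a correspondence search, seeded by the GPS/gyroscope pose as in the paper.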

  18. Development of 3D microwave imaging reflectometry in LHD (invited).

    PubMed

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  19. Fast 3D visualization of endogenous brain signals with high-sensitivity laser scanning photothermal microscopy.

    PubMed

    Miyazaki, Jun; Iida, Tadatsune; Tanaka, Shinji; Hayashi-Takagi, Akiko; Kasai, Haruo; Okabe, Shigeo; Kobayashi, Takayoshi

    2016-05-01

    A fast, high-sensitivity photothermal microscope was developed by implementing a spatially segmented balanced detection scheme into a laser scanning microscope. We confirmed a 4.9 times improvement in signal-to-noise ratio in the spatially segmented balanced detection compared with that of conventional detection. The system demonstrated simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 μs. The fluorescence image visualized neurons expressing yellow fluorescence proteins, while the photothermal signal detected endogenous chromophores in the mouse brain, allowing 3D visualization of the distribution of various features such as blood cells and fine structures probably due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. PMID:27231615

  20. Fast 3D visualization of endogenous brain signals with high-sensitivity laser scanning photothermal microscopy

    PubMed Central

    Miyazaki, Jun; Iida, Tadatsune; Tanaka, Shinji; Hayashi-Takagi, Akiko; Kasai, Haruo; Okabe, Shigeo; Kobayashi, Takayoshi

    2016-01-01

    A fast, high-sensitivity photothermal microscope was developed by implementing a spatially segmented balanced detection scheme into a laser scanning microscope. We confirmed a 4.9 times improvement in signal-to-noise ratio in the spatially segmented balanced detection compared with that of conventional detection. The system demonstrated simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 μs. The fluorescence image visualized neurons expressing yellow fluorescence proteins, while the photothermal signal detected endogenous chromophores in the mouse brain, allowing 3D visualization of the distribution of various features such as blood cells and fine structures probably due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. PMID:27231615

  1. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions, and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains in a 5 dB range for a wide selection of steering angles. The simulation results may represent a reference guide to the design of spiral sparse array probes for different application fields. PMID:26285181
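The Fermat-spiral seed layout can be sketched as follows: element n sits at angle n times the golden angle with radius proportional to sqrt(n), yielding a deterministic, aperiodic, roughly uniform-density aperture. This sketch omits the Blackman density tapering of the paper; the element count and aperture radius are assumptions.

```python
import numpy as np

def fermat_spiral(n_elements, radius):
    """Element positions on a Fermat spiral: angle n * golden_angle,
    radius ~ sqrt(n), normalized so the aperture fits in `radius`."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    n = np.arange(n_elements)
    r = radius * np.sqrt((n + 0.5) / n_elements)
    theta = n * golden_angle
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# e.g. 256 elements over a 30-wavelength-radius (60-wavelength) aperture
pts = fermat_spiral(256, radius=30.0)
```

Because the layout is deterministic there is no optimization loop at all; density tapering would then be applied by warping the radial coordinate with the chosen window.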

  2. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  3. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, the injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in the surgery of the third molars, rendering necessary a careful pre-operative evaluation of their anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between third molars and the mandibular canal using Dental CT Scan, DICOM image acquisition and 3D reconstruction with a dedicated software. By our study we deduce that 3D images are not indispensable, but they can provide a very agreeable assistance in the most complicated cases. PMID:23386934

  4. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
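The Monte Carlo core of MCML-style models is the sampling of photon free-path lengths: step s = -ln(xi)/mu_t for uniform random xi, giving exponentially distributed steps with mean 1/mu_t. A minimal sketch of just this sampling step (the mu_t value is an illustrative assumption):

```python
import numpy as np

def sample_step_lengths(mu_t, n, rng):
    """MCML-style free-path sampling: s = -ln(xi) / mu_t, where
    mu_t is the total interaction coefficient (absorption +
    scattering). Steps are exponential with mean 1/mu_t."""
    xi = rng.random(n)
    return -np.log(xi) / mu_t

rng = np.random.default_rng(3)
steps = sample_step_lengths(mu_t=10.0, n=100_000, rng=rng)  # mu_t in 1/cm
```

A full model then alternates these steps with absorption weighting and scattering-angle sampling at layer and vessel-wall interfaces, which is where the paper's tissue-model extensions enter.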

  5. Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structual Information

    NASA Astrophysics Data System (ADS)

    Hosoi, F.

    2014-12-01

    Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD). We refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as the voxel attributes in the 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, thus eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using each type of lidar alone. Based on the estimation results, we proposed an index named laser beam coverage index, Ω, which relates to the lidar's laser beam settings and a laser beam attenuation factor. It was shown that this index can be used for adjusting measurement set-up of lidar systems and also used for explaining the LAD estimation error using different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of the voxel tree modeling. 
In this method, voxel solid model of a target tree was produced from the lidar image, which is composed of
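The step from contact frequency to layer-wise LAD can be sketched as follows. This is a simplified illustration, not the authors' exact VCP estimator: the interception probability per layer is divided by the leaf projection factor G and the layer thickness dz; the counts, G, and dz below are assumptions.

```python
import numpy as np

def lad_profile(contact, incident, dz, G=0.5):
    """Simplified layer-wise LAD estimate from beam counts:
    contact[i]  - beams intercepted by leaves in layer i
    incident[i] - beams entering layer i
    LAD = interception probability / (G * layer thickness)."""
    p = contact / np.maximum(incident, 1)  # interception probability
    return p / (G * dz)

contact = np.array([5, 20, 40, 10])      # hypothetical voxel counts
incident = np.array([100, 95, 75, 35])
lad = lad_profile(contact, incident, dz=0.5)  # dz in meters
```

With full-canopy illumination (the point of the multi-position, inclined-beam setup), every layer has enough incident beams for the interception probability, and hence the LAD, to be well estimated.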

  6. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission have made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. After construction, optimization, and implementation of several components, including a diffuser, band-pass filter, registration mount, and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ˜60dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. 
Benchmarking tests showed the mean 3D passing gamma rate (3%, 3mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ˜10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of

  7. 3-D Imaging and Simulation for Nephron Sparing Surgical Training.

    PubMed

    Ahmadi, Hamed; Liu, Jen-Jane

    2016-08-01

    Minimally invasive partial nephrectomy (MIPN) is now considered the procedure of choice for small renal masses largely based on functional advantages over traditional open surgery. Lack of haptic feedback, the need for spatial understanding of tumor borders, and advanced operative techniques to minimize ischemia time or achieve zero-ischemia PN are among factors that make MIPN a technically demanding operation with a steep learning curve for inexperienced surgeons. Surgical simulation has emerged as a useful training adjunct in residency programs to facilitate the acquisition of these complex operative skills in the setting of restricted work hours and limited operating room time and autonomy. However, the majority of available surgical simulators focus on basic surgical skills, and procedure-specific simulation is needed for optimal surgical training. Advances in 3-dimensional (3-D) imaging have also enhanced the surgeon's ability to localize tumors intraoperatively. This article focuses on recent procedure-specific simulation models for laparoscopic and robotic-assisted PN and advanced 3-D imaging techniques as part of pre- and, in some cases, intraoperative surgical planning. PMID:27314271

  8. 3D Reconstruction of virtual colon structures from colonoscopy images.

    PubMed

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  9. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently distinct progress was made in the field of reconstruction of projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in a foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution, depending on the number of projections.

  10. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald Schön, Tobias Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently distinct progress was made in the field of reconstruction of projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in a foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution, depending on the number of projections.
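One representative member of the iterative class discussed above is SIRT (Simultaneous Iterative Reconstruction Technique), which repeatedly back-projects a normalized residual and tolerates far fewer projection angles than analytic methods. A minimal sketch on a tiny toy system standing in for the projection geometry (the matrix and relaxation factor are assumptions):

```python
import numpy as np

def sirt(A, b, n_iter=500, relax=0.9):
    """SIRT iteration: x <- x + relax * C * A^T (R * (b - A x)),
    with R and C the inverse row/column sums of the system matrix A."""
    row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * C * (A.T @ (R * (b - A @ x)))
    return x

# Toy projection system: 3 "rays", 3 "voxels".
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true          # simulated projection data
x = sirt(A, b)
```

On real container data A encodes ray-voxel intersections for the sparse set of angles; the per-iteration cost of the forward and back projections is exactly the "high computational cost" the abstract refers to.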

  11. Computing 3D head orientation from a monocular image sequence

    NASA Astrophysics Data System (ADS)

    Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

    1997-02-01

    An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking of the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and a fifth at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs the projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll, and pitch. Analytical and experimental results are reported.
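
    The projective invariance the approach relies on can be illustrated numerically: the cross-ratio of four collinear points is unchanged under any projective (fractional-linear) map. A sketch (the particular points and the map are arbitrary illustrations):

    ```python
    import numpy as np

    def cross_ratio(a, b, c, d):
        """Cross-ratio (AC/BC) / (AD/BD) of four collinear points given
        as 1-D parameters along a line.  Its invariance under projective
        transformations is what lets image measurements of the four eye
        corners constrain head orientation.
        """
        return ((c - a) / (c - b)) / ((d - a) / (d - b))

    # A fractional-linear map t -> (2t + 1) / (t + 3) models a change of
    # viewpoint along the line; it must preserve the cross-ratio.
    pts = np.array([0.0, 1.0, 2.0, 4.0])
    mapped = (2 * pts + 1) / (pts + 3)
    cr_before = cross_ratio(*pts)
    cr_after = cross_ratio(*mapped)
    ```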

  12. 3D electrical tomographic imaging using vertical arrays of electrodes

    NASA Astrophysics Data System (ADS)

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  13. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    NASA Astrophysics Data System (ADS)

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can considerably reduce the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture, provides effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.
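
    The CLEAN family of algorithms suppresses sparse-aperture sidelobes by iteratively subtracting scaled copies of the point-spread function at the brightest residual peaks. A 1-D sketch of plain CLEAN (the PSF, scene, gain, and iteration count are invented for illustration; the paper's LSCLEAN is a least-squares variant of this idea):

    ```python
    import numpy as np

    def clean_1d(dirty, psf, gain=0.3, n_iter=500):
        """Plain 1-D CLEAN deconvolution: find the brightest residual
        pixel, subtract a gain-scaled copy of the PSF centred there,
        and accumulate the subtracted components."""
        residual = dirty.astype(float).copy()
        components = np.zeros_like(residual)
        half = len(psf) // 2
        for _ in range(n_iter):
            k = int(np.argmax(np.abs(residual)))
            amp = gain * residual[k]
            components[k] += amp
            lo = max(0, k - half)
            hi = min(len(residual), k + half + 1)
            residual[lo:hi] -= amp * psf[half - (k - lo): half + (hi - k)]
        return components, residual

    # Two point scatterers blurred by a PSF with 20% sidelobes.
    psf = np.array([0.2, 1.0, 0.2])
    truth = np.zeros(20)
    truth[5], truth[12] = 2.0, 1.0
    dirty = np.convolve(truth, psf, mode='same')
    components, residual = clean_1d(dirty, psf)
    ```

    With a well-behaved PSF the component image converges to the underlying scatterers and the residual shrinks geometrically.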

  14. Active and interactive floating image display using holographic 3D images

    NASA Astrophysics Data System (ADS)

    Morii, Tsutomu; Sakamoto, Kunio

    2006-08-01

    We developed a prototype tabletop holographic display system. This system consists of the object recognition system and the spatial imaging system. In this paper, we describe the recognition system using an RFID tag and the 3D display system using a holographic technology. A 3D display system is useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction system. We have ever proposed 3D displays using the slit as a parallax barrier, the lenticular screen and the holographic optical elements(HOEs) for displaying active image 1,2,3. The purpose of this paper is to propose the interactive system using these 3D imaging technologies. In this paper, the authors describe the interactive tabletop 3D display system. The observer can view virtual images when the user puts the special object on the display table. The key technologies of this system are the object recognition system and the spatial imaging display.

  15. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  16. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  17. 3D in vivo optical skin imaging for intense pulsed light and fractional ablative resurfacing of photodamaged skin.

    PubMed

    Clementoni, Matteo Tretti; Lavagno, Rosalia; Catenacci, Maximilian; Kantor, Roman; Mariotto, Guido; Shvets, Igor

    2011-11-01

    The authors present a 3D in vivo imaging system used to assess the effectiveness of IPL and fractional laser treatments of photodamaged skin. Preoperative and postoperative images of patients treated with these procedures are analyzed and demonstrate the superior ability of this 3D technology to reveal the decrease in vascularity and the improvement in melanin distribution, and to quantify individual deep wrinkles before and after treatment. PMID:22004864

  18. Vector Acoustics, Vector Sensors, and 3D Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Lindwall, D.

    2007-12-01

    Vector acoustic data have two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than hydrophone arrays require. A vector acoustic sensor measures the particle motion due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom, and sides. Without resorting to the usual methods of seismic imaging, which in this case would be only two-dimensional and rely entirely on a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
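
    The "simple trigonometric calculation" reduces to intersecting the measured arrival direction with the ellipse fixed by the total travel time. A 2-D sketch (the source, receiver, and target positions are made up for illustration):

    ```python
    import numpy as np

    def reflection_point(src, rcv, u, total_path):
        """Locate a scatterer from a vector-sensor bearing.

        u          : unit vector from the receiver toward the scatterer
                     (from the measured particle-motion direction).
        total_path : source->scatterer->receiver distance (speed x time).
        Solves |src - (rcv + d*u)| = total_path - d for the range d:
        d = (L^2 - |v|^2) / (2 (L - u.v)),  with v = src - rcv.
        """
        v = src - rcv
        L = total_path
        d = (L**2 - v @ v) / (2.0 * (L - u @ v))
        return rcv + d * u

    src = np.array([0.0, 0.0])
    rcv = np.array([2.0, 0.0])
    target = np.array([1.0, 1.0])          # ground truth, for checking
    L = np.linalg.norm(target - src) + np.linalg.norm(target - rcv)
    u = (target - rcv) / np.linalg.norm(target - rcv)
    est = reflection_point(src, rcv, u, L)
    ```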

  19. A new algorithm of laser 3D visualization based on space-slice

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Song, Yanfeng; Song, Yong; Cao, Jie; Hao, Qun

    2013-12-01

    Traditional visualization algorithms for three-dimensional (3D) laser point-cloud data consist of two steps: separate the point cloud into different target objects, then build 3D surface models of those objects using interpolation or surface-fitting methods. However, most of these algorithms suffer from disadvantages such as low efficiency and loss of image detail. To cope with these problems, a 3D visualization algorithm based on space slices is proposed in this paper, comprising two steps: data classification and image reconstruction. In the first step, an edge-detection method checks parametric continuity and extracts edges to preliminarily classify the data into target regions. In the second step, the classified data are split further into space slices according to their coordinates. Within each space slice of the point cloud, one-dimensional interpolation is applied to smooth the curve connecting the points of that slice. Finally, the interpolation points obtained from all slices are used to generate the fitted surface. In this way, the visual morphology of the objects is obtained. Simulation results compared with real scenes show that the final images preserve explicit detail and that the overall visual result is natural.
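
    The per-slice smoothing step can be sketched as 1-D interpolation along each slice (the sample points and output density are illustrative assumptions; the paper lofts a surface through the resulting curves):

    ```python
    import numpy as np

    def smooth_slice(points, n_out=100):
        """Densify one vertical slice of a point cloud: sort the slice
        points along the slice coordinate, then 1-D interpolate to a
        regular, denser set of samples along the resulting curve."""
        pts = points[np.argsort(points[:, 0])]
        t = np.linspace(pts[0, 0], pts[-1, 0], n_out)
        return np.column_stack([t, np.interp(t, pts[:, 0], pts[:, 1])])

    # Three points of one slice (slice coordinate, height).
    slice_pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
    curve = smooth_slice(slice_pts, n_out=5)
    ```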

  20. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been devoted to 3D imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field-programmable gate array (FPGA) using a combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras share the same spatial resolution, letting us use the depth maps taken by the TOF camera to derive an initial disparity. With the depth map constraining the stereo matching of the image pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we configure a multi-core image-matching system, performing stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
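
    The prior-constrained disparity search can be sketched in software (the sum-of-absolute-differences cost, the search radius, and the toy images are illustrative assumptions; the paper's implementation runs on an FPGA):

    ```python
    import numpy as np

    def constrained_disparity(left, right, tof_disp, radius=2):
        """Per-pixel disparity search limited to tof_disp +/- radius,
        the narrow range a TOF depth prior allows; the disparity with
        the lowest absolute-difference cost is kept."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=int)
        for y in range(h):
            for x in range(w):
                d0 = int(tof_disp[y, x])
                best, best_cost = 0, np.inf
                for d in range(max(0, d0 - radius), min(x, d0 + radius) + 1):
                    cost = abs(float(left[y, x]) - float(right[y, x - d]))
                    if cost < best_cost:
                        best, best_cost = d, cost
                disp[y, x] = best
        return disp

    # Synthetic pair: the left image is the right image shifted by 2 px.
    right = np.tile(np.arange(8.0), (3, 1))
    left = np.zeros_like(right)
    left[:, 2:] = right[:, :-2]
    tof_prior = np.full(right.shape, 2)   # prior disparity from TOF depth
    disp_map = constrained_disparity(left, right, tof_prior)
    ```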

  1. Brain surface maps from 3-D medical images

    NASA Astrophysics Data System (ADS)

    Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

    1991-06-01

    The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method that extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebrospinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results by flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
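
    The radial projection onto an enclosing cylinder, followed by unrolling, can be sketched as follows (the cylinder radius, axis placement, and sample points are illustrative assumptions):

    ```python
    import numpy as np

    def flatten_to_cylinder(points, radius=100.0, axis_xy=(0.0, 0.0)):
        """Project 3-D contour points radially onto a vertical cylinder
        (axis normal to the slices) and cut it open: a point maps to
        (azimuth * radius, z).  Distinct azimuths land at distinct map
        positions, which is what makes the projection unique along each
        inner contour."""
        x = points[:, 0] - axis_xy[0]
        y = points[:, 1] - axis_xy[1]
        theta = np.arctan2(y, x)
        return np.column_stack([theta * radius, points[:, 2]])

    # Two sample contour points (coordinates in mm, for illustration).
    pts = np.array([[10.0, 0.0, 5.0],
                    [0.0, 10.0, 7.0]])
    flat = flatten_to_cylinder(pts)
    ```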

  2. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
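
    The 4th-order polynomial for the Snell refraction point arises from squaring Snell's law at a flat air-ground interface. A sketch that builds and solves it with a polynomial root finder (the antenna height `h`, target depth `d`, horizontal offset `X`, and velocity ratio `n` are illustrative; the paper uses a fully vectorized iterative scheme instead):

    ```python
    import numpy as np
    from numpy.polynomial import polynomial as P

    def snell_refraction_point(h, d, X, n):
        """Refraction point of the ray from an antenna at (0, h) to a
        buried target at (X, -d) across a flat interface at z = 0.
        Snell's law  r/sqrt(r^2+h^2) = n*(X-r)/sqrt((X-r)^2+d^2)
        is squared into the quartic
            r^2*((X-r)^2 + d^2) - n^2*(X-r)^2*(r^2 + h^2) = 0,
        and the physically valid real root in (0, X) is returned."""
        r = [0.0, 1.0]                       # the polynomial "r" (low-to-high)
        s = [X, -1.0]                        # X - r
        lhs = P.polymul(P.polymul(r, r), P.polyadd(P.polymul(s, s), [d * d]))
        rhs = n * n * P.polymul(P.polymul(s, s), P.polyadd(P.polymul(r, r), [h * h]))
        roots = P.polyroots(P.polysub(lhs, rhs))
        candidates = [z.real for z in roots
                      if abs(z.imag) < 1e-9 and 0.0 < z.real < X]
        # Squaring can introduce spurious roots; keep the one that best
        # satisfies the original (unsquared) Snell equation.
        def residual(x):
            return x / np.hypot(x, h) - n * (X - x) / np.hypot(X - x, d)
        return min(candidates, key=lambda x: abs(residual(x)))

    r_equal = snell_refraction_point(1.0, 1.0, 2.0, 1.0)   # n=1: mirror case
    r_fast = snell_refraction_point(1.0, 2.0, 3.0, 2.0)    # slower ground
    ```

    For `n = 1` the quartic degenerates and the crossing point is simply `h*X/(h+d)`, a useful sanity check.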

  3. Image appraisal for 2D and 3D electromagnetic inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1998-04-01

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional nonlinear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and model covariance matrices can be directly calculated. The columns of the model resolution matrix are shown to yield empirical estimates of the horizontal and vertical resolution throughout the imaging region. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how the estimated data noise maps into parameter error. When the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion), an iterative method can be applied to statistically estimate the model covariance matrix, as well as a regularization covariance matrix. The latter estimates the error in the inverted results caused by small variations in the regularization parameter. A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on a synthetic cross-well EM data set.
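
    For a damped linearized inverse problem, the model resolution and covariance matrices follow directly from the generalized inverse. A small sketch (the Jacobian, damping value, and noise level are illustrative assumptions):

    ```python
    import numpy as np

    def resolution_and_covariance(J, lam, sigma=1.0):
        """For the damped least-squares estimate
        m = (J^T J + lam*I)^-1 J^T d, the resolution matrix
        R = (J^T J + lam*I)^-1 J^T J maps true model to estimate
        (R = I means perfect resolution), and the model covariance is
        C = sigma^2 * G G^T with G the generalized inverse."""
        n = J.shape[1]
        G = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)  # generalized inverse
        R = G @ J                       # model resolution matrix
        C = sigma ** 2 * (G @ G.T)      # model covariance matrix
        return R, C

    J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy Jacobian
    R0, C0 = resolution_and_covariance(J, lam=0.0)      # unregularized: R = I
    R1, C1 = resolution_and_covariance(J, lam=1.0)      # damping blurs resolution
    ```

    The diagonal of `R1` dropping below 1 is exactly the loss of resolution that regularization trades for stability.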

  4. 3D geometric analysis of the aorta in 3D MRA follow-up pediatric image data

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Alrajab, Abdulsattar; Arnold, Raoul; Eichhorn, Joachim; von Tengg-Kobligk, Hendrik; Schenk, Jens-Peter; Rohr, Karl

    2014-03-01

    We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model which requires only relatively few model parameters. The new model is used in conjunction with a two-step fitting scheme for refining the segmentation result yielding an accurate segmentation of the vascular shape. Moreover, we include a novel adaptive background masking scheme and we describe a spatial normalization scheme to align the segmentation results from follow-up examinations. We have evaluated our proposed approach using different 3D synthetic images and we have successfully applied the approach to follow-up pediatric 3D MRA image data.

  5. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  6. Verification of 3d Building Models Using Mutual Information in Airborne Oblique Images

    NASA Astrophysics Data System (ADS)

    Nyaruhuma, A. P.; Gerke, M.; Vosselman, G.

    2012-07-01

    This paper describes a method for automatic verification of 3D building models using airborne oblique images. The problem tackled is using the images to identify buildings that have been demolished or changed since the models were constructed, or to identify wrong models. The models verified are of CityGML LOD2 or higher, since their edges are expected to coincide with actual building edges. The verification approach is based on information theory: corresponding variables between building models and oblique images are used to derive mutual information for individual edges, faces, or whole buildings, combined over all perspective images available for the building. The wireframe model edges are projected into the images and verified using low-level image features, namely the image pixel gradient directions. A building part is only checked against images in which it may be visible. The method has been tested with models constructed from laser points against Pictometry images, which are available for most cities of Europe and may be publicly viewed in the so-called Bird's Eye view of Microsoft Bing Maps. Nearly all buildings are correctly categorised as existing or demolished. Because we concentrate only on roofs, we also used the method to test and compare results from nadir images. This comparison made clear that height errors in models, in particular, can be more reliably detected in oblique images because of the tilted view. Besides overall building verification, results for individual edges can be used to improve the 3D building models.
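
    The mutual-information score at the heart of the verification can be estimated from a joint histogram. A sketch with synthetic direction samples (the bin count, sample sizes, and the two test distributions are illustrative assumptions):

    ```python
    import numpy as np

    def mutual_information(a, b, bins=16):
        """Histogram (plug-in) estimate of I(A;B) in bits.  For model
        verification, A would be the projected model-edge direction at
        a pixel and B the image gradient direction there; high MI
        supports the hypothesis that the building still exists."""
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        p = joint / joint.sum()
        pa = p.sum(axis=1, keepdims=True)
        pb = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(p[nz] / (pa @ pb)[nz])))

    rng = np.random.default_rng(1)
    x = rng.uniform(0, np.pi, 5000)                       # edge directions
    mi_dependent = mutual_information(x, x + rng.normal(0, 0.05, x.size))
    mi_independent = mutual_information(x, rng.uniform(0, np.pi, x.size))
    ```

    A gradient field that follows the projected edges yields a much higher score than an unrelated one, which is the decision signal the method thresholds.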

  7. 3D imaging and characterization of microlenses and microlens arrays using nonlinear microscopy

    NASA Astrophysics Data System (ADS)

    Krmpot, Aleksandar J.; Tserevelakis, George J.; Murić, Branka D.; Filippidis, George; Pantelić, Dejan V.

    2013-05-01

    In this work, nonlinear laser scanning microscopy was employed for the characterization and three-dimensional (3D) imaging of microlenses and microlens arrays. Third-harmonic generation and two-photon excitation fluorescence (TPEF) signals were recorded and the obtained data were further processed in order to generate 3D reconstructions of the examined samples. Femtosecond laser pulses (1028 nm) were utilized for excitation. Microlenses were manufactured on Tot'hema and eosin sensitized gelatin layers with a green (532 nm) continuous-wave laser beam using the direct laser writing method. The profiles of the microlens surface were obtained from the radial cross-sections, using a triple-Gaussian fit. The analytical shapes of the profiles were also used for ray tracing. Furthermore, the volumes of the microlenses were determined with high precision. The TPEF signal arising from the volume of the material was recorded and the respective 3D spatial fluorescence distribution of the samples was mapped. Nonlinear microscopy modalities have been shown to be a powerful diagnostic tool for microlens characterization as they enable in-depth investigations of the structural properties of the samples, in a nondestructive manner.
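
    The triple-Gaussian profile fit can be sketched with a standard least-squares fitter (the zero-centred form of the model, the synthetic profile parameters, and the initial guess are illustrative assumptions, not the paper's values):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def triple_gaussian(r, a1, s1, a2, s2, a3, s3):
        """Sum of three zero-centred Gaussians as a model for the
        radial surface profile h(r) of a rotationally symmetric lens."""
        return (a1 * np.exp(-r**2 / (2 * s1**2))
                + a2 * np.exp(-r**2 / (2 * s2**2))
                + a3 * np.exp(-r**2 / (2 * s3**2)))

    # Synthetic noiseless profile with known parameters (illustration only).
    r = np.linspace(0.0, 50.0, 200)            # radial coordinate, microns
    true_params = (5.0, 8.0, 3.0, 15.0, 1.0, 30.0)
    h = triple_gaussian(r, *true_params)
    popt, _ = curve_fit(triple_gaussian, r, h,
                        p0=(4.5, 9.0, 3.5, 14.0, 1.5, 28.0))
    fit_err = float(np.max(np.abs(triple_gaussian(r, *popt) - h)))

    # Lens volume: integrate the fitted profile of revolution by trapezoids.
    integrand = 2 * np.pi * r * triple_gaussian(r, *popt)
    volume = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))
    ```

    The analytic fit is what makes the subsequent ray tracing and volume determination straightforward.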

  8. Single-stage application of a novel decellularized dermis for treatment-resistant lower limb ulcers: positive outcomes assessed by SIAscopy, laser perfusion, and 3D imaging, with sequential timed histological analysis.

    PubMed

    Greaves, Nicholas S; Benatar, Brian; Baguneid, Mohamed; Bayat, Ardeshir

    2013-01-01

    We present results of an original clinical study investigating efficacy of a decellularized dermal skin substitute (DCD) as part of a one-stage therapeutic strategy for recalcitrant leg ulcers. Twenty patients with treatment-resistant ulcers underwent hydrosurgical debridement, after which DCD was applied and covered with negative pressure dressings for 1 week. Participants were reviewed on seven occasions over 6 months. 3D photography, full-field laser perfusion imaging, spectrophotometric intracutaneous analysis, and sequential biopsies were used to monitor healing. Mean ulcer duration and surface area prior to DCD placement were 4.76 years (range 0.25-40 years) and 13.11 cm² (range 1.06-40.75 cm²), respectively. Seventy percent of ulcers were venous. Surface area decreased in all patients after treatment (range 23-100%). Mean reduction was 87% after 6 months, and 60% of patients healed completely. Wound bed hemoglobin flux increased significantly 6 weeks after treatment (p = 0.005). Histological and immunohistochemical analysis confirmed progressive DCD integration with colonization by host fibroblasts, lymphocytes, and neutrophils, resulting in fibroplasia, reepithelialisation, and angiogenesis, with correlating raised CD31, collagen I, and collagen III levels. Subgroup analysis showed differing cellular behavior depending on wound duration, with delayed angiogenesis, reduced collagen deposition, and smaller reductions in surface area in ulcers present for over 1 year. The stain intensities of immunohistochemical markers including fibronectin, collagen, and CD31 differed depending on depth from the wound surface and presence of intact epithelium. DCD safely produced significant improvement in treatment-resistant leg ulcers. With no requirement for hospital admission, anesthetic, or autogenic skin grafting, this treatment could be administered in hospital and community settings. PMID:24134424

  9. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing

    NASA Astrophysics Data System (ADS)

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is ±0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient.
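
    Each of the 33 light planes defines, together with the camera ray of an illuminated pixel, a unique point in space. A minimal ray-plane triangulation sketch (camera at the origin; the plane pose and pixel ray are invented for illustration):

    ```python
    import numpy as np

    def triangulate(pixel_dir, plane_point, plane_normal):
        """Intersect a camera ray through the origin (direction
        pixel_dir) with a known laser light plane given by a point on
        it and its normal.  Assumes the ray is not parallel to the
        plane (plane_normal @ pixel_dir != 0)."""
        t = (plane_normal @ plane_point) / (plane_normal @ pixel_dir)
        return t * pixel_dir

    # Laser plane x = 0.2 m (one of the projected planes, for illustration);
    # a pixel back-projects to the ray direction (0.1, 0.0, 1.0).
    p = triangulate(np.array([0.1, 0.0, 1.0]),
                    plane_point=np.array([0.2, 0.0, 0.0]),
                    plane_normal=np.array([1.0, 0.0, 0.0]))
    ```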

  10. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-01

    The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast across physiological differences such as skin tone and/or the presence of hair on the arm or wrist surface are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to our multispectral system in order to provide 3D information about the patient's arm orientation. Images of each patient's arm are captured under every possible combination of illuminants, and the optimal combination of wavelengths that maximizes vein contrast for a given subject is determined using linear discriminant analysis. PMID:19582050
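
    Ranking wavelengths by vein/skin separability can be sketched with the Fisher discriminant ratio that underlies linear discriminant analysis (the band set and reflectance distributions below are invented for illustration, not measured data):

    ```python
    import numpy as np

    def fisher_contrast(vein_px, skin_px):
        """Fisher discriminant ratio between vein and skin pixel
        samples: (mean separation)^2 / (within-class variance).
        Higher means the band gives better vein contrast."""
        mv, ms = vein_px.mean(), skin_px.mean()
        return (mv - ms) ** 2 / (vein_px.var() + skin_px.var() + 1e-12)

    rng = np.random.default_rng(0)
    # Hypothetical reflectance samples (vein, skin) at three NIR bands.
    bands = {
        800: (rng.normal(0.40, 0.05, 500), rng.normal(0.45, 0.05, 500)),
        850: (rng.normal(0.30, 0.05, 500), rng.normal(0.55, 0.05, 500)),
        900: (rng.normal(0.42, 0.05, 500), rng.normal(0.47, 0.05, 500)),
    }
    scores = {nm: fisher_contrast(v, s) for nm, (v, s) in bands.items()}
    best_nm = max(scores, key=scores.get)
    ```

    Full LDA generalizes this ratio to weighted combinations of several bands, which is how an optimal wavelength combination per subject is selected.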

  11. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition, and others. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, including reconstruction, recognition, and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected to a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to the surface shape provides sufficient 3D coordinate information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns projected onto the object of interest. Akin to the Shannon-Nyquist sampling theorem, we derive the minimum number of circular patterns that sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori only along the curves), and an interpolation is carried out to complete the photometric reconstruction. The reconstruction is approximate, because the minimum number of patterns may not exactly reproduce the original object, but the result shows no considerable information loss, and the performance of the approximate reconstruction is evaluated by performing recognition or classification. In object recognition, we use facial curves, which are the deformed circular curves (patterns) on a target object.
We simply carry out comparison between the

  12. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
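
    The prior-knowledge idea above can be sketched in a few lines: a toy SIRT loop whose iterate is snapped after each update to a known set of material grey levels. This is a simplified stand-in for discrete tomography algorithms such as DART, not the authors' method; the 2-pixel "object" and 3-ray system below are invented for illustration.

```python
import numpy as np

def discrete_sirt(A, p, grey_levels, n_iter=50):
    """Toy discrete tomography: a SIRT update followed by snapping each
    pixel to the nearest admissible grey level (the prior knowledge
    about the materials present in the scanned object)."""
    x = np.zeros(A.shape[1])
    col = A.sum(axis=0); col[col == 0] = 1.0
    row = A.sum(axis=1); row[row == 0] = 1.0
    for _ in range(n_iter):
        x = x + (A.T @ ((p - A @ x) / row)) / col      # SIRT step
        # project onto the discrete set of known grey levels
        x = grey_levels[np.argmin(np.abs(x[:, None] - grey_levels[None, :]), axis=1)]
    return x

# invented 2-pixel "object" probed by 3 rays; materials: air (0) and metal (1)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([0.0, 1.0])
p = A @ x_true
x_rec = discrete_sirt(A, p, grey_levels=np.array([0.0, 1.0]))
print(x_rec)  # → [0. 1.]
```

    With only two admissible grey levels, the snap step resolves the solution from far fewer projections than a continuous reconstruction would need, which is the source of the scan-time reduction discussed above.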

  13. 3D imaging of semiconductor components by discrete laminography

    NASA Astrophysics Data System (ADS)

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  14. Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems

    PubMed Central

    Jung, Jaehoon; Kim, Jeonghyun; Yoon, Sanghyun; Kim, Sangmin; Cho, Hyoungsig; Kim, Changjae; Heo, Joon

    2015-01-01

    The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system’s trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach. PMID:25946627
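
    The least-squares alignment at the heart of such a calibration can be illustrated with a minimal 2D sketch (a closed-form Kabsch/Procrustes fit between scan points and matched target centers; the authors' constrained, iteratively weighted estimator is more elaborate, and the data below are synthetic):

```python
import numpy as np

def estimate_boresight_2d(scan_pts, target_pts):
    """Least-squares rigid alignment (rotation + translation) between
    points measured by an LRF and surveyed target centers (2D sketch)."""
    sc, tc = scan_pts.mean(axis=0), target_pts.mean(axis=0)
    H = (scan_pts - sc).T @ (target_pts - tc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tc - R @ sc
    return R, t

# synthetic check: a 30-degree rotation plus a (1, 2) translation
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
t_true = np.array([1.0, 2.0])
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
R, t = estimate_boresight_2d(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```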

  15. Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems.

    PubMed

    Jung, Jaehoon; Kim, Jeonghyun; Yoon, Sanghyun; Kim, Sangmin; Cho, Hyoungsig; Kim, Changjae; Heo, Joon

    2015-01-01

    The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system's trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach. PMID:25946627

  16. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treating piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and treatment has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real-time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. For comparison, a recent study found that fluoroscopically guided injections were only 30% accurate, whereas ultrasound guidance roughly tripled that accuracy. This novel technique exhibited a needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative while minimizing radiation exposure. PMID:23703429
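
    The three-fiducial registration step described above can be sketched as a generic three-point rigid registration: matched orthonormal frames are built from the fiducials in each space and composed into a rotation and translation. This is an illustrative textbook construction under noiseless assumptions, not the vendor's algorithm, and the coordinates are invented.

```python
import numpy as np

def frame_from_points(p0, p1, p2):
    """Build an orthonormal frame from three non-collinear fiducials."""
    x = p1 - p0; x = x / np.linalg.norm(x)
    z = np.cross(x, p2 - p0); z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def register_three_points(us_pts, ct_pts):
    """Rigid transform mapping ultrasound-space fiducials onto CT-space
    fiducials via matched orthonormal frames (three-point registration)."""
    Fu = frame_from_points(*us_pts)
    Fc = frame_from_points(*ct_pts)
    R = Fc @ Fu.T
    t = ct_pts[0] - R @ us_pts[0]
    return R, t

# invented fiducials: CT space is ultrasound space rotated 90 deg about z,
# then shifted by (1, 2, 3)
us = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
ct = us @ R_true.T + t_true
R, t = register_three_points(us, ct)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```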

  17. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
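
    A brute-force 1D bilateral filter makes the parameter trade-off concrete: the weights combine a spatial term and an intensity (range) term, so edges survive while noise is smoothed. The stencil radius and sigmas below are illustrative values, not the study's tuned parameters, and the signal is synthetic.

```python
import numpy as np

def mse(a, b):
    """Mean squared error against a reference signal."""
    return np.mean((a - b) ** 2)

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.5):
    """Brute-force bilateral filter: each weight is the product of a
    spatial Gaussian and an intensity-difference Gaussian, so smoothing
    stops at edges (radius plays the role of the stencil size)."""
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        d = np.arange(lo, hi) - i
        w = np.exp(-d**2 / (2 * sigma_s**2)
                   - (signal[lo:hi] - signal[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * signal[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # a step edge
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = bilateral_1d(noisy)
print(mse(denoised, clean) < mse(noisy, clean))  # True
```

    Even this small stencil lowers the MSE against the reference, which mirrors the study's observation that small stencils can already give the best quality improvement.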

  18. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
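
    The per-band range resolution described above reduces to round-trip pulse timing: a return at time t after emission corresponds to range c·t/2, and multiple returns in one band model partial obscuration (e.g. foliage in front of a target). The times below are invented for illustration.

```python
# Each spectral band is range-resolved independently: an echo arriving at
# time t after pulse emission maps to range r = c * t / 2 (round trip).
C = 299_792_458.0  # speed of light, m/s

def returns_to_ranges(return_times_s):
    """Convert echo arrival times (seconds after emission) to ranges (m).
    Multiple entries model multiple returns from one transmitted pulse."""
    return [C * t / 2.0 for t in return_times_s]

# two returns, 200 ns and 400 ns after emission: roughly 30 m and 60 m
print([round(r, 1) for r in returns_to_ranges([200e-9, 400e-9])])  # [30.0, 60.0]
```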

  19. 3D segmentation of prostate ultrasound images using wavelet transform

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Yang, Xiaofeng; Halig, Luma V.; Fei, Baowei

    2011-03-01

    The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) are located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classifying prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, the W-SVMs are trained in three planes: sagittal, coronal, and transverse. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. After post-processing, the labeled voxels from the three planes are overlaid on a prostate probability model, which is created using 10 segmented prostate datasets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, plus one probability label. By defining a weight function for each labeling in each region, each voxel is finally labeled as a prostate or non-prostate voxel. Experimental results on real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
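
    The per-voxel fusion of the three plane-wise labels with the probability label can be sketched as a weighted vote. The weights and probabilities below are invented, and the paper's region-dependent weight function is more elaborate than this flat weighting.

```python
import numpy as np

def fuse_labels(plane_probs, weights):
    """Weighted fusion of per-source 'prostate' probabilities: each voxel
    is labeled prostate (1) if the weighted score exceeds half the total
    weight, else non-prostate (0)."""
    score = np.tensordot(weights, plane_probs, axes=1)   # (n_voxels,)
    return (score > 0.5 * np.sum(weights)).astype(int)

# rows: sagittal, coronal, transverse classifiers + probability atlas;
# columns: three example voxels (all values invented)
probs = np.array([
    [0.9, 0.2, 0.6],
    [0.8, 0.1, 0.4],
    [0.7, 0.3, 0.9],
    [0.9, 0.2, 0.5],
])
w = np.array([1.0, 1.0, 1.0, 2.0])   # give the atlas a larger say
fused = fuse_labels(probs, w)
print(fused)  # [1 0 1]
```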

  20. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20-MHz-switching high-speed image shutter device for 3D image capturing, and its application to a system prototype, are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables the capture of full HD depth images with depth accuracy on the mm scale, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
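
    The TOF principle with a 20 MHz modulator can be illustrated with the standard four-phase depth equation used by indirect time-of-flight cameras. This is a generic formulation, not necessarily the prototype's exact demodulation scheme, and the sample values are invented.

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # 20 MHz shutter modulation frequency

def tof_depth(a0, a1, a2, a3):
    """Four-phase (0, 90, 180, 270 degrees) indirect time-of-flight:
    phase = atan2(a3 - a1, a0 - a2), depth = c * phase / (4 * pi * f)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# the unambiguous range for 20 MHz modulation is c / (2 f), about 7.49 m
print(round(C / (2 * F_MOD), 2))              # 7.49
# a measured phase of pi/2 corresponds to a quarter of that range
print(round(tof_depth(0.0, 0.0, 0.0, 1.0), 2))  # 1.87
```

    The 20 MHz figure thus fixes both the depth precision and the roughly 7.5 m unambiguous working range of such a camera.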

  1. Statistical properties of polarization image and despeckling method by multiresolution block-matching 3D filter

    NASA Astrophysics Data System (ADS)

    Wen, D. H.; Jiang, Y. S.; Zhang, Y. Z.; Gao, Q.

    2014-03-01

    Theoretical and experimental investigations of the speckle statistics of a polarization imaging system, and of a speckle-removal method, are presented. A method to obtain two images encoded by polarization degree within a single measurement process is proposed, together with a theoretical model of the polarization imaging system based on the Mueller matrix. According to modern charge-coupled device (CCD) imaging characteristics, speckles are divided into two kinds, namely small speckle and big speckle. Based on this model, a speckle reduction algorithm combining a dual-tree complex wavelet transform (DTCWT) and a block-matching 3D filter (BM3D), dubbed DTBM3D, is proposed. The original laser image data, after logarithmic compression, are decomposed by the DTCWT into approximation and detail subbands. Bilateral filtering is applied to the approximation subbands, and a suitably tuned BM3D filter is applied to the detail subbands. The despeckling results show that the contrast improvement index and the edge preservation index outperform those of traditional methods. This work provides a useful reference for assessing speckle noise levels and for speckle removal.
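
    The logarithmic compression step works because speckle is multiplicative: taking logs turns it into additive noise, which ordinary filters handle well. A minimal homomorphic sketch follows, with a simple box blur standing in for the DTCWT/BM3D stages; the signal and noise model are invented.

```python
import numpy as np

def despeckle_homomorphic(img, denoise):
    """Homomorphic despeckling: multiplicative speckle becomes additive
    under a log transform, is filtered there, and is mapped back."""
    return np.expm1(denoise(np.log1p(img)))

def box_blur(a, k=5):
    """Stand-in for the DTCWT/BM3D filtering stage (not equivalent)."""
    return np.convolve(a, np.ones(k) / k, mode="same")

rng = np.random.default_rng(1)
clean = np.full(200, 10.0)
# multiplicative speckle: unit-mean gamma noise scales the signal
speckled = clean * rng.gamma(16.0, 1.0 / 16.0, size=200)
out = despeckle_homomorphic(speckled, box_blur)
print(np.std(out) < np.std(speckled))  # True: smoother after filtering
```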

  2. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize 3D surfaces and their motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, a systematic approach for evaluating the performance of different 3D surface imaging systems has not existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  3. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  4. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    PubMed

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  5. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to visualize dose intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential for improving the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction was developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a visualized three-dimensional operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with a Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion between MRI and ultrasound images. Applying a least squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system was validated on eight patients with prostate cancer. The navigation has passed precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results together. Compared to MC, the presented multi-organ reconstruction method is superior in preserving the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissues. During navigation, surgeons can observe instrument coordinates in real time using the ETS. After calibration, the needle position error is less than 2.5 mm according to the experiments. Conclusion: The speed and
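
    The Monte Carlo flavor of such dose calculations can be illustrated by sampling photon first-interaction depths and checking them against the analytic attenuation law. This is a toy model with an invented attenuation coefficient, far simpler than a clinical brachytherapy dose engine.

```python
import numpy as np

rng = np.random.default_rng(7)
MU = 0.5  # toy linear attenuation coefficient, 1/cm (invented)

# Monte Carlo: sample first-interaction radii for photons leaving a
# point source; free path lengths are exponentially distributed
r = rng.exponential(scale=1.0 / MU, size=200_000)

# fraction of first interactions falling in each radial shell
counts, edges = np.histogram(r, bins=[0.0, 1.0, 2.0, 3.0])
frac = counts / len(r)

# analytic shell fractions: exp(-mu*a) - exp(-mu*b) for shell [a, b)
analytic = np.exp(-MU * np.array([0.0, 1.0, 2.0])) \
         - np.exp(-MU * np.array([1.0, 2.0, 3.0]))
print(np.allclose(frac, analytic, atol=0.01))  # True
```

    With 200,000 histories the sampled shell fractions match the analytic values to better than one percent, which is the basic convergence behavior a planning system relies on.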

  6. 3D Seismic Imaging over a Potential Collapse Structure

    NASA Astrophysics Data System (ADS)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle East has seen a recent boom in construction, including the planning and development of completely new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters, such as the type of material (soil/rock), the thickness of top soil or rock layers, and the depth and elastic parameters of the basement, comprise important information needed before a decision about the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly at the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate the effectiveness of a suite of 3D seismic techniques in interrogating the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m² and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop, while the geophones consisted of 3-component 10 Hz velocity sensors. To date, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of the recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  7. 3D imaging of enzymes working in situ.

    PubMed

    Jamme, F; Bourquin, D; Tawil, G; Viksø-Nielsen, A; Buléon, A; Réfrégiers, M

    2014-06-01

    Today, the development of slowly digestible food with positive health impact and the production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or biomass such as lignocellulose. Label-free imaging, using UV autofluorescence, provides a great tool to follow a single enzyme acting on a non-UV-fluorescent substrate. In this article, we report synchrotron deep-UV (DUV) fluorescence in 3-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup was specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. Thus, this tool is particularly effective for improving knowledge and understanding of the enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

  8. Use of Low-cost 3-D Images in Teaching Gross Anatomy.

    ERIC Educational Resources Information Center

    Richards, Boyd F.; And Others

    1987-01-01

    With advances in computer technology, it has become possible to create three-dimensional (3-D) images of anatomical structures for use in teaching gross anatomy. Reported is a survey of attitudes of 91 first-year medical students toward the use of 3-D images in their anatomy course. Reactions to the 3-D images and suggestions for improvement are…

  9. The image adaptive method for solder paste 3D measurement system

    NASA Astrophysics Data System (ADS)

    Xiaohui, Li; Changku, Sun; Peng, Wang

    2015-03-01

    The extensive application of Surface Mount Technology (SMT) requires various measurement methods to evaluate the circuit board. The solder paste 3D measurement system, which projects laser light onto the printed circuit board (PCB) surface, is one of the critical methods. Local oversaturation, arising from the inconsistent reflectivity of the PCB surface, leads to inaccurate measurements. This paper reports a novel adaptive optical imaging method for remedying local oversaturation in solder paste measurement. A liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) are combined into a high dynamic range image (HDRI) acquisition system. The significant characteristic of the new method is that the adjusted image is captured by the specially designed HDRI acquisition system programmed by the LCoS mask. The LCoS mask is formed from an HDRI combined with an image fusion algorithm, by separating the laser light from the locally oversaturated region. Experimental results demonstrate that the method can significantly improve the accuracy of the solder paste 3D measurement system under local oversaturation.
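
    The exposure-fusion idea behind HDRI acquisition can be sketched as a saturation-aware merge of two exposures: trust the long exposure where it is unsaturated, and fall back to the gain-scaled short exposure elsewhere. The gain, threshold, and radiance values below are invented for illustration.

```python
import numpy as np

def merge_hdr(short_exp, long_exp, gain, sat=0.95):
    """Fuse two exposures: use the long exposure where it is below the
    saturation threshold, otherwise the gain-scaled short exposure."""
    return np.where(long_exp < sat, long_exp / gain, short_exp)

scene = np.array([0.02, 0.10, 0.50, 0.90])   # true radiance, arb. units
gain = 8.0
long_exp = np.clip(scene * gain, 0.0, 1.0)   # bright regions saturate
short_exp = np.clip(scene, 0.0, 1.0)         # dark regions stay unclipped
merged = merge_hdr(short_exp, long_exp, gain)
print(np.allclose(merged, scene))  # True: radiance recovered everywhere
```

    The LCoS mask plays an analogous role optically: it attenuates only the oversaturated regions so a single capture behaves like this fused result.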

  10. Real Time Quantitative 3-D Imaging of Diffusion Flame Species

    NASA Technical Reports Server (NTRS)

    Kane, Daniel J.; Silver, Joel A.

    1997-01-01

    A low-gravity environment, in space or ground-based facilities such as drop towers, provides a unique setting for study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fail to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video laser Schlieren imaging and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely due to the fact that traditional sampling methods (quenching microprobes using GC and/or mass spec analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have - up until now - also required excessively bulky, power hungry equipment. However, with the advent of near-IR diode

  11. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensates for the distortion in the elemental images in an EHR projection-type integral 3-D system. Consequently, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
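
    The compensation of geometric distortion in elemental images can be sketched with a single-coefficient radial model, a common assumption in lens calibration; the paper's video signal processor is not described at this level of detail, and the coefficient and coordinates below are invented.

```python
import numpy as np

def undistort_points(xy, k1):
    """Single-coefficient radial compensation for elemental-image
    coordinates: x_u = x_d * (1 + k1 * r^2), with r measured from the
    image center (a generic model, not the authors' processor)."""
    r2 = np.sum(xy * xy, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2)

# normalized elemental-image coordinates; points farther from the center
# receive a larger compensating shift, matching the observation that the
# corners of the display are the most distorted
pts = np.array([[0.1, 0.0], [0.5, 0.0]])
print(undistort_points(pts, k1=0.1))
```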

  12. Comparison of 3D representations depicting micro folds: overlapping imagery vs. time-of-flight laser scanner

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristidis D.; Georgopoulos, Andreas; Lozios, Stylianos G.

    2012-10-01

    A relatively new field of interest, one that continually gains ground, is digital 3D modeling. The methodologies, the accuracy, and the time and effort required to produce a high-quality 3D model have changed drastically over the last few years. Whereas in the early days of digital 3D modeling, 3D models were only accessible to computer experts in animation, working many hours in expensive, sophisticated software, today 3D modeling has become reasonably fast and convenient. On top of that, with online 3D modeling software such as 123D Catch, nearly everyone can produce 3D models with minimum effort and at no cost. The only requirement is panoramic overlapping images of the (still) objects the user wishes to model. This approach, however, has limitations in the accuracy of the model. One objective of the study is to examine these limitations by assessing the accuracy of this 3D modeling methodology against a Terrestrial Laser Scanner (TLS). The scope of this study is therefore to present and compare 3D models produced with two different methods: 1) the traditional TLS method, with the ScanStation 2 instrument by Leica, and 2) panoramic overlapping images obtained with a DSLR camera and processed with the free 123D Catch software. The main objective of the study is to evaluate the advantages and disadvantages of the two 3D model production methodologies. The area represented by the 3D models features multi-scale folding in a cipollino marble formation. The most interesting part, and the most challenging to capture accurately, is an outcrop that includes vertically orientated micro folds. These micro folds have dimensions of a few centimeters, while a relatively strong relief is evident between them (perhaps due to different material composition). The area of interest is located on Mt. Hymittos, Greece.

  13. Transmission of holographic 3D images using infrared transmitter(II): on a study of transmission of holographic 3D images using infrared transmitter safe to medical equipment

    NASA Astrophysics Data System (ADS)

    Takano, Kunihiko; Muto, Kenji; Tian, Lan; Sato, Koki

    2007-09-01

    An infrared transmission technique for 3D holographic images is studied. It appears to be very effective for transmitting 3D holographic images in places where radio-wave transmission is prohibited. In this paper, we first describe our infrared transmission system for holograms and a display system for presenting the holographic 3D images reconstructed from the received signal. Next, we report the results obtained by infrared transmission of a CGH and compare the real and the reconstructed 3D images in our system. We find that the reconstructed holographic 3D images suffer no large deterioration in quality and can be presented with high contrast.

  14. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi(2) nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively. PMID:19269094
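
    The alternation at the heart of DART (an algebraic update, a snap to the small set of allowed grey levels, then re-solving only the boundary pixels of the segmented image) can be sketched in a toy form. The SIRT-style update, the 4-neighbour boundary rule, and all parameter choices below are simplifying assumptions for illustration, not the published algorithm:

```python
import numpy as np

def sirt_update(x, A, b, n_iter=60, relax=0.9):
    """Simultaneous (SIRT-style) algebraic update towards A @ x = b."""
    col = A.sum(axis=0); col[col == 0] = 1.0
    row = A.sum(axis=1); row[row == 0] = 1.0
    for _ in range(n_iter):
        x = x + relax * (A.T @ ((b - A @ x) / row)) / col
    return x

def dart(A, b, grey_levels, shape, n_outer=10):
    """Toy DART loop: algebraic update, snap to the allowed grey levels,
    then re-solve only the boundary pixels of the segmented 2D image."""
    levels = np.asarray(grey_levels, float)
    snap = lambda v: levels[np.abs(v[:, None] - levels[None, :]).argmin(axis=1)]
    x = sirt_update(np.zeros(A.shape[1]), A, b)
    for _ in range(n_outer):
        seg = snap(x)
        img = seg.reshape(shape)
        # boundary pixels: any 4-neighbour holds a different grey level
        pad = np.pad(img, 1, mode='edge')
        boundary = np.zeros(shape, bool)
        for nb in (pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]):
            boundary |= (nb != img)
        free = boundary.ravel()
        # fixed (interior) pixels keep their grey level; subtract their projections
        b_res = b - A[:, ~free] @ seg[~free]
        x = seg.copy()
        x[free] = sirt_update(seg[free], A[:, free], b_res)
    return snap(x).reshape(shape)
```

    Fixing interior pixels at a discrete level is what lets the method get by with far fewer projections than a continuous reconstruction.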

  15. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
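
    At a single box size, a relative entropy index of this kind can be sketched as the Kullback-Leibler divergence between the observed pore-count distribution over boxes and a uniform distribution. The exact normalisation used by Bird et al. (2006) may differ, so treat this as an illustrative assumption:

```python
import numpy as np

def relative_entropy(binary_vol, box):
    """KL divergence of the pore-count distribution over cubic boxes of
    edge `box` (which should divide each dimension) from uniform.
    binary_vol: 3D boolean array, True = pore voxel."""
    v = np.asarray(binary_vol, float)
    nz, ny, nx = (s // box for s in v.shape)
    counts = (v[:nz * box, :ny * box, :nx * box]
              .reshape(nz, box, ny, box, nx, box)
              .sum(axis=(1, 3, 5)).ravel())
    total = counts.sum()
    if total == 0:
        return 0.0          # no pore space at all
    p = counts / total
    nonzero = p > 0
    # uniform reference q = 1 / n_boxes; RE = sum p * log(p / q)
    return float(np.sum(p[nonzero] * np.log(p[nonzero] * p.size)))
```

    The index is 0 for pore space spread uniformly over the boxes and log(n_boxes) when all pore voxels fall in one box; scanning `box` over a range of sizes probes the scaling behaviour mentioned above.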

  16. Image-Based 3d Reconstruction and Analysis for Orthodontia

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measuring teeth parameters and designing the ideal arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets placed on the teeth and a wire of given shape clamped by these brackets to produce the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying a standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment, and documentation is proposed to overcome these disadvantages. The proposed approach enables accurate measurement of the teeth parameters needed for adequate planning, design of correct teeth positions, and monitoring of the treatment process. The developed technique uses photogrammetric methods for 3D model generation of the dental arch, determination of bracket positions, and analysis of tooth movement.

  17. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still demands a great deal of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically, due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also allow a light level of manual editing.
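
    The overlap measures named in this abstract have standard definitions, which a short sketch can make concrete (the function name and the return order are ours, not the paper's):

```python
import numpy as np

def overlap_metrics(seg, gt):
    """Volume-overlap metrics between two binary segmentations.
    seg: automated segmentation; gt: gold-standard mask."""
    seg = np.asarray(seg, bool)
    gt = np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    dice = 2.0 * inter / (seg.sum() + gt.sum())
    jaccard = inter / union
    tpvf = inter / gt.sum()                  # true positive volume fraction
    rvd = (seg.sum() - gt.sum()) / gt.sum()  # relative volume difference
    return dice, jaccard, tpvf, rvd
```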

  18. Mapping Nearby Terrain in 3D by Use of a Grid of Laser Spots

    NASA Technical Reports Server (NTRS)

    Padgett, Curtis; Liebe, Carl; Chang, Johnny; Brown, Kenneth

    2007-01-01

    A proposed optoelectronic system, to be mounted aboard an exploratory robotic vehicle, would be used to generate a three-dimensional (3D) map of nearby terrain and obstacles for purposes of navigating the vehicle across the terrain and avoiding the obstacles. The difference between this system and the other systems would lie in the details of implementation. In this system, the illumination would be provided by a laser. The beam from the laser would pass through a two-dimensional diffraction grating, which would divide the beam into multiple beams propagating in different, fixed, known directions. These beams would form a grid of bright spots on the nearby terrain and obstacles. The centroid of each bright spot in the image would be computed. For each such spot, the combination of (1) the centroid, (2) the known direction of the light beam that produced the spot, and (3) the known baseline would constitute sufficient information for calculating the 3D position of the spot.
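
    The calculation described above is two-ray triangulation: the camera ray through the spot centroid and the known laser beam direction, offset by the baseline. A minimal midpoint-method sketch, assuming the camera sits at the origin and the pixel-to-ray conversion from the centroid has already been done:

```python
import numpy as np

def triangulate_spot(cam_ray, laser_origin, laser_dir):
    """Midpoint triangulation: point midway between the camera ray
    (through the origin, along cam_ray) and the laser beam
    (from laser_origin, along laser_dir) at closest approach."""
    d1 = np.asarray(cam_ray, float)
    d2 = np.asarray(laser_dir, float)
    o2 = np.asarray(laser_origin, float)
    # perpendicularity conditions d1.(p1 - p2) = 0 and d2.(p1 - p2) = 0
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(a, np.array([d1 @ o2, d2 @ o2]))
    p1 = t1 * d1              # closest point on the camera ray
    p2 = o2 + t2 * d2         # closest point on the laser beam
    return 0.5 * (p1 + p2)
```

    When the rays intersect exactly (no measurement noise), the midpoint coincides with the spot's true 3D position.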

  19. Laser direct writing 3D structures for microfluidic channels: flow meter and mixer

    NASA Astrophysics Data System (ADS)

    Lin, Chih-Lang; Liu, Yi-Jui; Lin, Zheng-Da; Wu, Bo-Long; Lee, Yi-Hsiung; Shin, Chow-Shing; Baldeck, Patrice L.

    2015-03-01

    The 3D laser direct-writing technology aims at modeling arbitrary, complex three-dimensional (3D) microstructures by scanning a laser focal point along predetermined trajectories. Through the perspective technique, the details of designed 3D structures can be properly fabricated in a microchannel. This study introduces a direct-reading flow meter and a 3D passive mixer fabricated by laser direct writing for microfluidic applications. The flow meter consists of two rod-shaped springs, a pillar, an anchor, and a wedge-shaped indicator, installed inside a microfluidic channel. The indicator is deflected by the flowing fluid while restrained by the spring, establishing an equilibrium indication of the flow rate. The measurement is readily carried out by optical microscopy observation. The 3D passive Archimedes-screw-shaped mixer is designed to disturb the laminar flow in three dimensions to enhance the mixing efficiency. The simulation results indicate that the screw provides 3D disturbance of the streamlines in the microchannel. The mixing demonstration for fluids flowing in the microchannel approximately agrees with the simulation result. Thanks to the advantages of laser direct-writing technology, this study demonstrates ingenious applications of 3D structures in microchannels.

  20. Fabrication of Conductive 3D Gold-Containing Microstructures via Direct Laser Writing.

    PubMed

    Blasco, Eva; Müller, Jonathan; Müller, Patrick; Trouillet, Vanessa; Schön, Markus; Scherer, Torsten; Barner-Kowollik, Christopher; Wegener, Martin

    2016-05-01

    3D conductive microstructures containing gold are fabricated by simultaneous photopolymerization and photoreduction via direct laser writing. The photoresist employed consists of water-soluble polymers and a gold precursor. The fabricated microstructures show good conductivity and are successfully employed for 3D connections between gold pads. PMID:26953811

  1. Multiplexed 3D FRET imaging in deep tissue of live embryos

    PubMed Central

    Zhao, Ming; Wan, Xiaoyang; Li, Yu; Zhou, Weibin; Peng, Leilei

    2015-01-01

    Current deep tissue microscopy techniques are mostly restricted to intensity mapping of fluorophores, which significantly limit their applications in investigating biochemical processes in vivo. We present a deep tissue multiplexed functional imaging method that probes multiple Förster resonant energy transfer (FRET) sensors in live embryos with high spatial resolution. The method simultaneously images fluorescence lifetimes in 3D with multiple excitation lasers. Through quantitative analysis of triple-channel intensity and lifetime images, we demonstrated that Ca2+ and cAMP levels of live embryos expressing dual FRET sensors can be monitored simultaneously at microscopic resolution. The method is compatible with a broad range of FRET sensors currently available for probing various cellular biochemical functions. It opens the door to imaging complex cellular circuitries in whole live organisms. PMID:26387920

  2. Image processing of radiographs in 3D Rayleigh-Taylor decelerating interface experiments

    NASA Astrophysics Data System (ADS)

    Kuranz, C. C.; Drake, R. P.; Grosskopf, M. J.; Robey, H. F.; Remington, B. A.; Hansen, J. F.; Blue, B. E.; Knauer, J.

    2009-08-01

    This paper discusses high-energy-density laboratory astrophysics experiments exploring the Rayleigh-Taylor instability under conditions similar to the blast wave driven, outermost layer in a core-collapse supernova. The planar blast wave is created in an experimental target using the Omega laser. The blast wave crosses an unstable interface with a seed perturbation machined onto it. The perturbation consists of a 3D “egg crate” pattern and, in some cases, an additional longer wavelength mode is added to this 3D, single-mode pattern. The main diagnostic of this experiment is x-ray radiography. This paper explores an image processing technique to improve the identification and characterization of structure in the radiographic data.

  3. Investigation Into the Utilization of 3D Printing in Laser Cooling Experiments

    NASA Astrophysics Data System (ADS)

    Hazlett, Eric; Nelson, Brandon; de Leon, Sam Diaz; Shaw, Jonah

    2016-05-01

    With the advancement of 3D printing, new opportunities abound in many different fields, but given the balance between the precision demanded by atomic physics experiments and the material properties of current 3D printers, the benefit of 3D printing technology needs to be investigated. We report on the progress of two investigations of 3D printing of benefit to atomic physics experiments: a laser feedback module and an optical chopper. The first investigation looks into the creation of a 3D-printed laser diode feedback module. This 3D-printed module would allow the quick realization of an external cavity diode laser with an adjustable cavity distance. We will report on the first tests of this system by looking at Rb spectroscopy and the mode-hop-free tuning range, as well as the possibility of using these lasers for MOT generation. We will also discuss our investigation into a 3D-printed optical chopper that utilizes an Arduino and a computer hard drive motor. By implementing an additional Arduino, we create a low-cost way to quickly measure laser beam waists.

  4. Investigation Into the Utilization of 3D Printing in Laser Cooling Experiments

    NASA Astrophysics Data System (ADS)

    Hazlett, Eric

    With the advancement of 3D printing, new opportunities abound in many different fields, but given the balance between the precision demanded by atomic physics experiments and the material properties of current 3D printers, the benefit of 3D printing technology needs to be investigated. We report on the progress of two investigations of 3D printing of benefit to atomic physics experiments: a laser feedback module and an optical chopper. The first investigation looks into the creation of a 3D-printed laser diode feedback module. This 3D-printed module would allow the quick realization of an external cavity diode laser with an adjustable cavity distance. We will report on the first tests of this system by looking at Rb spectroscopy and the mode-hop-free tuning range, as well as the possibility of using these lasers for MOT generation. We will also discuss our investigation into a 3D-printed optical chopper that utilizes an Arduino and a computer hard drive motor. By implementing an additional Arduino, we create a low-cost way to quickly measure laser beam waists.

  5. Application to monitoring of tailings dam based on 3D laser scanning technology

    NASA Astrophysics Data System (ADS)

    Ren, Fang; Zhang, Aiwu

    2011-06-01

    This paper presents a new method for monitoring tailings dams based on 3D laser scanning technology and gives the workflow for acquiring and processing the tailings dam data. Taking measured data as an example, the authors analyzed the dam deformation by generating the TIN, the DEM, and the curvature graph, and showed, in both theory and method, that global monitoring of a tailings dam using 3D laser scanning technology is feasible.

  6. Laser Provides First 3-D View of Mars' North Pole

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This first three-dimensional picture of Mars' north pole enables scientists to estimate the volume of its water ice cap with unprecedented precision, and to study its surface variations and the heights of clouds in the region for the first time.

    Approximately 2.6 million laser pulse measurements were assembled into a topographic grid of the north pole with a spatial resolution of 0.6 miles (one kilometer) and a vertical accuracy of 15-90 feet (5-30 meters).

    The principal investigator for MOLA is Dr. David E. Smith of Goddard. The MOLA instrument was designed and built by the Laser Remote Sensing Branch of Laboratory for Terrestrial Physics at Goddard. The Mars Global Surveyor Mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for the NASA Office of Space Science.

  7. Laser-assisted direct ink writing of planar and 3D metal architectures

    PubMed Central

    Skylar-Scott, Mark A.; Gunasekaran, Suman; Lewis, Jennifer A.

    2016-01-01

    The ability to pattern planar and freestanding 3D metallic architectures at the microscale would enable myriad applications, including flexible electronics, displays, sensors, and electrically small antennas. A 3D printing method is introduced that combines direct ink writing with a focused laser that locally anneals printed metallic features “on-the-fly.” To optimize the nozzle-to-laser separation distance, the heat transfer along the printed silver wire is modeled as a function of printing speed, laser intensity, and pulse duration. Laser-assisted direct ink writing is used to pattern highly conductive, ductile metallic interconnects, springs, and freestanding spiral architectures on flexible and rigid substrates. PMID:27185932

  8. Laser-assisted direct ink writing of planar and 3D metal architectures.

    PubMed

    Skylar-Scott, Mark A; Gunasekaran, Suman; Lewis, Jennifer A

    2016-05-31

    The ability to pattern planar and freestanding 3D metallic architectures at the microscale would enable myriad applications, including flexible electronics, displays, sensors, and electrically small antennas. A 3D printing method is introduced that combines direct ink writing with a focused laser that locally anneals printed metallic features "on-the-fly." To optimize the nozzle-to-laser separation distance, the heat transfer along the printed silver wire is modeled as a function of printing speed, laser intensity, and pulse duration. Laser-assisted direct ink writing is used to pattern highly conductive, ductile metallic interconnects, springs, and freestanding spiral architectures on flexible and rigid substrates. PMID:27185932

  9. Laser-assisted direct ink writing of planar and 3D metal architectures

    NASA Astrophysics Data System (ADS)

    Skylar-Scott, Mark A.; Gunasekaran, Suman; Lewis, Jennifer A.

    2016-05-01

    The ability to pattern planar and freestanding 3D metallic architectures at the microscale would enable myriad applications, including flexible electronics, displays, sensors, and electrically small antennas. A 3D printing method is introduced that combines direct ink writing with a focused laser that locally anneals printed metallic features “on-the-fly.” To optimize the nozzle-to-laser separation distance, the heat transfer along the printed silver wire is modeled as a function of printing speed, laser intensity, and pulse duration. Laser-assisted direct ink writing is used to pattern highly conductive, ductile metallic interconnects, springs, and freestanding spiral architectures on flexible and rigid substrates.

  10. An enhanced method for registration of dental surfaces partially scanned by a 3D dental laser scanning.

    PubMed

    Park, Seongjin; Kang, Ho Chul; Lee, Jeongjin; Shin, Juneseuk; Shin, Yeong Gil

    2015-01-01

    In this paper, we propose a fast and accurate registration method for partially scanned dental surfaces in 3D dental laser scanning. To overcome the multiple point correspondence problems of conventional surface registration methods, we propose a novel depth map-based method to register 3D surface models. First, we convert a partially scanned 3D dental surface into a 2D image by applying a 3D rigid transformation to the surface model and generating its 2D depth map image. Then, an image-based registration using the 2D depth map images accurately estimates the initial transformation between two consecutively acquired surface models. To further increase the computational efficiency, we decompose the 3D rigid transformation into out-of-plane (i.e. x-, y-rotation, and z-translation) and in-plane (i.e. x-, y-translation, and z-rotation) transformations. For the in-plane transformation, we accelerate the process by transforming the 2D depth map image instead of the 3D surface model. For more accurate registration of the 3D surface models, we enhance the iterative closest point (ICP) method for the subsequent fine registration. Our initial depth map-based registration aligns each surface model well; therefore, the subsequent ICP step can accurately register the two surface models, since it is highly probable that the closest point pairs are the exact corresponding point pairs. The experimental results demonstrated that our method accurately registers partially scanned dental surfaces. Regarding computational performance, our method delivered about 1.5 times faster registration than the conventional method. Our method can be successfully applied to the accurate reconstruction of 3D dental objects for orthodontic and prosthodontic treatment. PMID:25453381
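
    The depth-map generation step can be sketched as a simple z-buffer: project the scanned points onto a regular (x, y) grid and keep, per cell, the depth nearest the viewer. The grid size, bounds handling, and NaN fill below are our assumptions, not the paper's implementation:

```python
import numpy as np

def depth_map(points, grid=(64, 64), bounds=None):
    """Render a point cloud as a 2D depth map: for each (x, y) cell keep
    the maximum z (the surface closest to the viewer). Empty cells are NaN.
    points: (N, 3) array; bounds: ((xmin, xmax), (ymin, ymax)) or None."""
    p = np.asarray(points, float)
    if bounds is None:
        bounds = ((p[:, 0].min(), p[:, 0].max()),
                  (p[:, 1].min(), p[:, 1].max()))
    (x0, x1), (y0, y1) = bounds
    h, w = grid
    ix = np.clip(((p[:, 0] - x0) / (x1 - x0 + 1e-12) * w).astype(int), 0, w - 1)
    iy = np.clip(((p[:, 1] - y0) / (y1 - y0 + 1e-12) * h).astype(int), 0, h - 1)
    depth = np.full(grid, np.nan)
    # unbuffered scatter: keep the largest z that lands in each cell
    np.fmax.at(depth, (iy, ix), p[:, 2])
    return depth
```

    An in-plane (x-, y-translation, z-rotation) transform then becomes a 2D image transform of this depth map, which is what makes the decomposition fast.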

  11. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  12. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    PubMed

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images are often better than the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  13. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images are often better than the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  14. X-ray stereo imaging for micro 3D motions within non-transparent objects

    NASA Astrophysics Data System (ADS)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2012-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections with a 90° separation angle, the 3D coordinates of the marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image whose grey scale is linearly proportional to the marker velocity. From the grey scale along the motion path, the 3D motion (velocity) is obtained. The path of motion was reconstructed and compared with the applied waveform. The results showed that the accuracy is on the order of 5%. The difference in displacement amplitude between the new method and laser vibrometry was less than 5 μm. We demonstrated the method on the motion of the malleus ossicle in the gerbil middle ear as a function of the pressure applied to the eardrum. The new method has the advantage over existing methods, such as laser vibrometry, that the structures under study do not need to be visually exposed. Thanks to the short measurement time and the high resolution, the method can be useful for a variety of applications in biomechanics.
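
    With two projections 90° apart and an (assumed) parallel-projection geometry, merging the marker coordinates into 3D reduces to pairing the two horizontal axes and averaging the shared vertical axis; a minimal sketch:

```python
import numpy as np

def combine_orthogonal_views(xz_a, yz_b):
    """Merge marker coordinates from two x-ray projections 90 degrees apart.
    xz_a: (N, 2) (x, z) positions seen in view A
    yz_b: (N, 2) (y, z) positions of the same markers in view B
    The shared vertical axis z is averaged between the two views."""
    xz_a = np.asarray(xz_a, float)
    yz_b = np.asarray(yz_b, float)
    z = 0.5 * (xz_a[:, 1] + yz_b[:, 1])
    return np.column_stack([xz_a[:, 0], yz_b[:, 0], z])
```

    With cone-beam (point-source) geometry, a magnification correction per marker would be needed first; the parallel-beam case is shown for clarity.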

  15. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. This review begins with an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their application in agriculture is reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  16. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing the efficiency of resources through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. This review begins with an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their application in agriculture is reviewed. The main focus lies on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  17. Dense 3d Point Cloud Generation from Uav Images from Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we aim to apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based technique and an image space-based technique, and compared their performance. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, the corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging the local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. The experiments confirmed that, through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the image space-based matching results, we observed some blanks in the 3D point clouds. In the object space-based matching results, we observed more blunders than in the image space-based results, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
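
    The object space-based matching described above can be sketched as a search over candidate heights scored by grey-level correlation; the projection callback, the patch size, and the normalised cross-correlation score below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_height(heights, project, images, patch=5):
    """Object space-based matching for one fixed horizontal position:
    test each candidate height, project it into every image, and keep
    the height whose patches correlate best.
    project(img_index, h) -> (row, col) of the projected object point."""
    half = patch // 2
    best = (-2.0, None)          # (score, height); NCC is always >= -1
    for h in heights:
        patches, ok = [], True
        for i, img in enumerate(images):
            r, c = project(i, h)
            if not (half <= r < img.shape[0] - half and
                    half <= c < img.shape[1] - half):
                ok = False
                break
            patches.append(img[r - half:r + half + 1, c - half:c + half + 1])
        if ok:
            score = ncc(patches[0], patches[1])
            if score > best[0]:
                best = (score, h)
    return best[1]
```

    Running this search over a grid of horizontal positions yields a local point cloud, which the global optimization stage then merges with its neighbours.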

  18. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

For identifying pathological findings in MRIs, the anatomical structures in MRIs must first be identified. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were then made. 3D images of the anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was developed. This educational tool, which includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study the anatomical structures in MRIs.

  19. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

3D imaging techniques are important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large installation spaces and entail high costs. On the other hand, a low-cost, compact 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. It can be realized with a much cheaper system than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. The reconstructions are evaluated by veterinarians.

  20. Assessment of rhinoplasty techniques by overlay of before-and-after 3D images.

    PubMed

    Toriumi, Dean M; Dixon, Tatiana K

    2011-11-01

    This article describes the equipment and software used to create facial 3D imaging and discusses the validation and reliability of the objective assessments done using this equipment. By overlaying preoperative and postoperative 3D images, it is possible to assess the surgical changes in 3D. Methods are described to assess the 3D changes from the rhinoplasty techniques of nasal dorsal augmentation, increasing tip projection, narrowing the nose, and nasal lengthening. PMID:22004862

  1. Gothic Churches in Paris ST Gervais et ST Protais Image Matching 3d Reconstruction to Understand the Vaults System Geometry

    NASA Astrophysics Data System (ADS)

    Capone, M.; Campi, M.; Catuogno, R.

    2015-02-01

This paper is part of a research project on ribbed vault systems in French Gothic cathedrals. Our goal is to compare several Gothic cathedrals to understand the complex geometry of their ribbed vaults. The survey itself is not the main objective; rather, it is the means of verifying the theoretical hypotheses about the geometric configuration of the flamboyant churches in Paris. The choice of survey method generally depends on the goal; in this case we had to study many churches in a short time, so we chose a 3D reconstruction method based on dense stereo image matching. This method allowed us to obtain the information necessary for our study without bringing special equipment, such as a laser scanner. The goal of this paper is to test the image matching 3D reconstruction method on some particular study cases and to show its benefits and drawbacks. From a methodological point of view, our workflow is: - theoretical study of the geometric configuration of rib vault systems; - 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between the 3D theoretical model and the 3D model based on image matching.

  2. 3-D imaging and quantitative comparison of human dentitions and simulated bite marks.

    PubMed

    Blackwell, S A; Taylor, R V; Gordon, I; Ogleby, C L; Tanijiri, T; Yoshino, M; Donald, M R; Clement, J G

    2007-01-01

    This study presents a technique developed for 3-D imaging and quantitative comparison of human dentitions and simulated bite marks. A sample of 42 study models and the corresponding bites, made by the same subjects in acrylic dental wax, were digitised by laser scanning. This technique allows image comparison of a 3-D dentition with a 3-D bite mark, eliminating distortion due to perspective as experienced in conventional photography. Cartesian co-ordinates of a series of landmarks were used to describe the dentitions and bite marks, and a matrix was created to compare all possible combinations of matches and non-matches using cross-validation techniques. An algorithm, which estimated the probability of a dentition matching its corresponding bite mark, was developed. A receiver operating characteristic graph illustrated the relationship between values for specificity and sensitivity. This graph also showed for this sample that 15% of non-matches could not be distinguished from the true match, translating to a 15% probability of falsely convicting an innocent person. PMID:16391946
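
    The ROC analysis described can be illustrated generically: sweep a decision threshold over similarity scores and trace sensitivity against specificity. The scoring values below are made up for illustration; the paper's estimator was built with cross-validation over landmark-based comparison matrices.

```python
def roc_points(match_scores, nonmatch_scores):
    """(threshold, sensitivity, specificity) over all decision thresholds,
    where a score >= threshold is declared a match."""
    thresholds = sorted(set(match_scores) | set(nonmatch_scores), reverse=True)
    points = []
    for t in thresholds:
        tp = sum(s >= t for s in match_scores)     # true matches accepted
        fp = sum(s >= t for s in nonmatch_scores)  # non-matches accepted
        sensitivity = tp / len(match_scores)
        specificity = 1.0 - fp / len(nonmatch_scores)
        points.append((t, sensitivity, specificity))
    return points

# Hypothetical similarity scores: at full sensitivity, one of four
# non-matches still scores above the weakest true match.
matches = [0.92, 0.85, 0.70]
nonmatches = [0.88, 0.55, 0.40, 0.30]
curve = roc_points(matches, nonmatches)
full_sens = [(t, sp) for t, se, sp in curve if se == 1.0]
```

The first entry of `full_sens` gives the best specificity achievable without missing any true match, which is the operating point behind the paper's 15% false-conviction figure.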

  3. Formation and properties of 3D metamaterial composites fabricated using nanometer scale laser lithography (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Prokes, Sharka M.; Perkins, Frank K.; Glembocki, Orest J.

    2015-08-01

    Metamaterials designed for the visible or near IR wavelengths require patterning on the nanometer scale. To achieve this, e-beam lithography is used, but it is extremely difficult and can only produce 2D structures. A new alternative technique to produce 2D and 3D structures involves laser fabrication using the Nanoscribe 3D laser lithography system. This is a direct laser writing technique which can form arbitrary 3D nanostructures on the nanometer scale and is based on multi-photon polymerization. We are creating 2D and 3D metamaterials via this technique, and subsequently conformally coating them using Atomic Layer Deposition of oxides and Ag. We will discuss the optical properties of these novel composite structures and their potential for dual resonant metamaterials.

  4. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm that adjusts and synthesizes the disparity values of the ROI (region of interest) in real time. We discuss the aperture pattern used for deblurring of the CMOS camera module based on the Kirchhoff diffraction formula and clarify why a sharper and clearer image can be obtained by blocking a portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasis effect.
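
    One way to sketch the ROI-exaggeration idea, assuming a dense disparity map is already available: scale disparity inside the ROI, then re-synthesize the second view with a simple forward warp. This ignores occlusion handling and the paper's CUDA implementation entirely.

```python
import numpy as np

def exaggerate_roi(disparity, roi, gain):
    """Scale disparity inside the ROI to exaggerate perceived depth there."""
    r0, r1, c0, c1 = roi
    d = disparity.astype(float).copy()
    d[r0:r1, c0:c1] *= gain
    return d

def forward_warp(left, disparity):
    """Re-synthesize the right view by shifting each left-view pixel
    horizontally by its (rounded) disparity. No occlusion handling."""
    h, w = left.shape
    right = np.zeros_like(left)
    for r in range(h):
        for c in range(w):
            tc = c - int(round(disparity[r, c]))
            if 0 <= tc < w:
                right[r, tc] = left[r, c]
    return right

# Tiny synthetic example: uniform disparity of 1 px, doubled in the ROI.
disp = np.full((4, 6), 1.0)
disp_x = exaggerate_roi(disp, (1, 3, 2, 4), gain=2.0)
left = np.arange(24, dtype=float).reshape(4, 6)
right = forward_warp(left, disp_x)
```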

  5. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice. PMID:22004854

  6. Four-view stereoscopic imaging and display system for web-based 3D image communication

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server, a graphics card with four outputs, a projection-type 4-view 3D display system, and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth, and number of views. Experimental results show that the proposed system can display 4-view VGA images with 16-bit full color at a frame rate of 15 fps in real time. The image resolution, color depth, frame rate, and number of views are mutually interrelated and can be easily controlled in the proposed system through the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical web-based 3D image communication applications.

  7. Imaging 3D strain field monitoring during hydraulic fracturing processes

    NASA Astrophysics Data System (ADS)

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in the concrete were used to monitor the 3D strain field building up under external hydraulic pressure. High-spatial-resolution strain fields were interrogated via in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optic sensing scheme presented in this paper provides scientists and engineers a unique laboratory tool for understanding hydraulic fracturing processes in various rock formations and their environmental impacts.

  8. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on the 2D features of the captured fingerprint. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto the finger surface. The fringe patterns are deformed by the finger surface and captured, from another viewpoint, by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments were carried out by acquiring several 3D fingerprint datasets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
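
    As background, the wrapped phase of three sinusoidal fringe images shifted by 120° can be recovered with the standard three-step phase-shifting formula. This generic sketch is not the paper's optimum three-fringe-number method, which additionally unwraps phase across several fringe densities:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts of
    -120, 0 and +120 degrees:
    phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringes: I_k = A + B*cos(phi + k*2*pi/3) for k = -1, 0, +1.
phi_true = np.linspace(-3.0, 3.0, 101)   # within (-pi, pi), so no wrapping
A, B, d = 0.5, 0.4, 2.0 * np.pi / 3.0
i1 = A + B * np.cos(phi_true - d)
i2 = A + B * np.cos(phi_true)
i3 = A + B * np.cos(phi_true + d)
phi = three_step_phase(i1, i2, i3)
```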

  9. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023 degrees for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155, 0.243, and 0.074 degrees for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications. PMID:17706656
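
    The iterative pose adjustment can be illustrated in heavily reduced form: search for the transformation of the moving image that minimizes an image-matching metric against the fixed image. The toy below searches only integer 2D shifts under a sum-of-squared-differences metric, whereas IBRSA optimizes a full 3D CT pose against DRRs:

```python
import numpy as np

def register_shift(fixed, moving, search=4):
    """Exhaustive search over integer (dy, dx) shifts of `moving`,
    keeping the shift that minimizes the SSD image-matching metric."""
    best_shift, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = float(((fixed - shifted) ** 2).sum())
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

# Known misalignment: the moving image is the fixed image rolled
# by (-2, +3), so the recovered shift should be (+2, -3).
rng = np.random.default_rng(1)
fixed = rng.random((16, 16))
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)
shift = register_shift(fixed, moving)
```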

  10. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of displayed 3D images and videos.
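
    The integral-to-plenoptic transformation can be sketched as a gather operation: pixel (i, j) of every elemental image contributes to sub-aperture view (i, j). The row-major tiling convention below is an assumption for illustration, not necessarily the authors' layout:

```python
import numpy as np

def to_subaperture_views(integral, m, n):
    """Rearrange an integral image tiled from S x T elemental images of
    size m x n into an (m, n, S, T) stack of sub-aperture views, so that
    views[i, j] collects pixel (i, j) from every elemental image."""
    S, T = integral.shape[0] // m, integral.shape[1] // n
    return integral.reshape(S, m, T, n).transpose(1, 3, 0, 2)

m, n = 3, 3                                  # elemental image size (assumed)
integral = np.arange(6 * 9).reshape(6, 9)    # 2 x 3 grid of elemental images
views = to_subaperture_views(integral, m, n)
```

Each `views[i, j]` is a low-resolution perspective of the scene; refocusing and FOV selection then operate on this stack.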

  11. Remote z-scanning with a macroscopic voice coil motor for fast 3D multiphoton laser scanning microscopy

    PubMed Central

    Rupprecht, Peter; Prendergast, Andrew; Wyart, Claire; Friedrich, Rainer W

    2016-01-01

There is a high demand for 3D multiphoton imaging in neuroscience and other fields, but scanning in the axial direction presents technical challenges. We developed a focusing technique based on a remote movable mirror that is conjugate to the specimen plane and translated by a voice coil motor. We constructed cost-effective z-scanning modules from off-the-shelf components that can be mounted onto standard multiphoton laser scanning microscopes to extend scan patterns from 2D to 3D. The systems were designed for large objectives and provide high resolution, high speed and a large z-scan range (>300 μm). We used these systems for 3D multiphoton calcium imaging in the adult zebrafish brain and measured odor-evoked activity patterns across >1500 neurons with single-neuron resolution and a high signal-to-noise ratio. PMID:27231612

  12. Remote z-scanning with a macroscopic voice coil motor for fast 3D multiphoton laser scanning microscopy.

    PubMed

    Rupprecht, Peter; Prendergast, Andrew; Wyart, Claire; Friedrich, Rainer W

    2016-05-01

There is a high demand for 3D multiphoton imaging in neuroscience and other fields, but scanning in the axial direction presents technical challenges. We developed a focusing technique based on a remote movable mirror that is conjugate to the specimen plane and translated by a voice coil motor. We constructed cost-effective z-scanning modules from off-the-shelf components that can be mounted onto standard multiphoton laser scanning microscopes to extend scan patterns from 2D to 3D. The systems were designed for large objectives and provide high resolution, high speed and a large z-scan range (>300 μm). We used these systems for 3D multiphoton calcium imaging in the adult zebrafish brain and measured odor-evoked activity patterns across >1500 neurons with single-neuron resolution and a high signal-to-noise ratio. PMID:27231612

  13. Progress in Tridimensional (3d) Laser Forming of Stainless Steel Sheets

    NASA Astrophysics Data System (ADS)

    Gisario, Annamaria; Barletta, Massimiliano; Venettacci, Simone; Veniali, Francesco

    2015-09-01

Achieving complex shapes with high dimensional accuracy and precision by forming processes is a demanding challenge for scientists and practitioners. The available technologies are numerous, with laser forming progressively emerging because of its limited springback and its lack of molds and sophisticated auxiliary equipment. However, laser forming finds limited application, especially when the forming of tridimensional (3d) complex shapes is required. In this case, cost savings are often counterbalanced by the need for troublesome forming strategies. Therefore, traditional alternatives based on mechanical devices are usually preferred to laser systems. In the present work, 3d laser forming of stainless steel sheets by a high power diode laser is investigated. In particular, a set of scanning patterns was found that forms domes from flat blanks by simple, easy-to-manage radial paths alone. Numerous 3d items were also processed by diode laser to manufacture a number of complex shapes with high flexibility and limited effort to modify the auxiliary forming equipment. Based on the experimental results and analytical data, the high power diode laser was found capable of forming arbitrary 3d shapes through the implementation of tailored laser scanning patterns and appropriate settings of the operational parameters.

  14. 3D Coincidence Imaging Disentangles Intense Field Double Detachment of SF6(–).

    PubMed

    Kandhasamy, Durai Murugan; Albeck, Yishai; Jagtap, Krishna; Strasser, Daniel

    2015-07-23

    The efficient intense field double detachment of molecular anions observed in SF6(–) is studied by 3D coincidence imaging of the dissociation products. The dissociation anisotropy and kinetic energy release distributions are determined for the energetically lowest double detachment channel by virtue of disentangling the SF5(+) + F fragmentation products. The observed nearly isotropic dissociation with respect to the linear laser polarization and surprisingly high kinetic energy release events suggest that the dissociation occurs on a highly excited state. Rydberg (SF6(+))* states composed of a highly repulsive dication core and a Rydberg electron are proposed to explain the observed kinetic energy release, accounting also for the efficient production of all possible cationic fragments at equivalent laser intensities. PMID:26098224

  15. 3D Viability Imaging of Tumor Phantoms Treated with Single Walled Carbon Nanohorns and Photothermal Therapy

    PubMed Central

    Whitney, Jon; Dewitt, Matthew; Whited, Bryce M.; Carswell, William; Simon, Alex; Rylander, Christopher G.; Rylander, Marissa Nichole

    2013-01-01

    Objective A new image analysis method called the Spatial Phantom Evaluation of Cellular Thermal Response in Layers (SPECTRL) is presented for assessing spatial viability response to nanoparticle enhanced photothermal therapy in tissue representative phantoms. Materials and Methods Sodium alginate phantoms seeded with MDA-MB-231 breast cancer cells and single walled nanohorns were laser irradiated with an ytterbium fiber laser at a wavelength of 1064 nm and irradiance of 3.8 watts/cm2 for 10–80 seconds. SPECTRL quantitatively assessed and correlated 3D viability with spatiotemporal temperature. Results and Conclusions Based on this analysis, kill and transition zones increased from 3.7 mm3 and 13 mm3 respectively to 44.5 mm3 and 44.3 mm3 as duration was increased from 10–80 seconds. SPECTRL provides a quantitative tool for measuring precise spatial treatment regions, providing information necessary to tailor therapy protocols. PMID:23780336
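
    A minimal sketch of the kind of zone quantification SPECTRL performs: threshold a 3D viability map into kill and transition classes and convert voxel counts into volumes. The thresholds, voxel size, and data below are hypothetical, not the paper's calibrated values:

```python
import numpy as np

def zone_volumes(viability, voxel_mm3, kill_below=0.2, live_above=0.8):
    """Volumes (mm^3) of the kill zone (viability < kill_below) and the
    transition zone (kill_below <= viability < live_above)."""
    kill = viability < kill_below
    transition = (viability >= kill_below) & (viability < live_above)
    return kill.sum() * voxel_mm3, transition.sum() * voxel_mm3

# Toy 2x2x2 viability map (1.0 = fully viable), 0.5 mm^3 voxels.
v = np.array([[[0.05, 0.50], [0.90, 0.10]],
              [[0.30, 0.95], [0.85, 0.75]]])
kill_mm3, trans_mm3 = zone_volumes(v, voxel_mm3=0.5)
```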

  16. Filtering method for 3D laser scanning point cloud

    NASA Astrophysics Data System (ADS)

    Liu, Da; Wang, Li; Hao, Yuncai; Zhang, Jun

    2015-10-01

In recent years, with the rapid development of hardware and software for three-dimensional model acquisition, three-dimensional laser scanning technology has been utilized in many areas, especially in space exploration. Filtering the point cloud is an essential step before the data can be used. In this paper, considering both processing quality and computing speed, an improved mean-shift point cloud filtering method is proposed. First, by analyzing the similarity between the normal vector of the point being processed and those of its nearby points, the iterative neighborhood of the mean shift is selected dynamically, which suppresses high-frequency noise. Second, the normal vector of the processed point is updated. Finally, an updated position is calculated for each point, and each point is moved along its normal vector to that position. Experimental results show that large features are retained while small sharp features are also preserved for objects of different sizes and shapes, so target feature information is protected precisely. The computational complexity of the proposed method is low, and it yields high-precision results quickly, making it well suited to space applications. It can also be used in civil applications such as large-object measurement, industrial measurement, and car navigation. In the future, filtering aided by point intensity will be further explored.
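
    The per-point update described can be sketched as a single smoothing pass that moves each point along its normal toward the mean of its neighborhood; the fixed radius and precomputed normals below are simplifications of the paper's dynamically selected neighborhoods:

```python
import numpy as np

def normal_shift_pass(points, normals, radius=1.5):
    """One filtering pass: project the offset to the neighbourhood mean
    onto each point's (unit) normal and move the point along that normal
    only, so tangential detail (sharp features) is disturbed less."""
    out = points.copy()
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[dist < radius]
        offset = nbrs.mean(axis=0) - p
        out[i] = p + np.dot(offset, normals[i]) * normals[i]
    return out

# Noisy samples of the plane z = 0; all normals point along +z, so the
# pass should reduce the z-noise while leaving x and y untouched.
rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 5.0, size=(200, 2))
z = rng.normal(0.0, 0.05, size=(200, 1))
pts = np.hstack([xy, z])
nrm = np.tile(np.array([0.0, 0.0, 1.0]), (200, 1))
smoothed = normal_shift_pass(pts, nrm)
```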

  17. 3-D CFD in a day - The laser digitizer project

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal; Barth, Tim

    1991-01-01

The computation of airflow over complex configurations requires a complete description of the geometry. This can be obtained from CAD data, from blueprints, or from actual models. In any case, the time required is currently estimated at 4 to 6 months. It is proposed to shorten this time by a factor of 10 to 100 through the use of automated software, a fast, highly parallel computer, and a three-dimensional laser digitizer. This device can provide (x,y,z) coordinates of surface points at rates exceeding 14,500/sec. Thus, it is possible to digitize an entire model in a few minutes. The accuracy of measurement on a flat white surface is better than 0.005 inches. Higher accuracy is available at higher cost. This work discusses the challenges that remain to be addressed. In particular, the surface point data need to be converted into a surface description, the surface description needs to be made into a surface grid, and the surface grid used to make a volume grid for the flow solver. Algorithms are in place, or in mind, for all of these problems. Integration of the more mature flow solution and visualization algorithms then allows generation of solution graphics directly from a wind tunnel model.

  18. Lensfree diffractive tomography for the imaging of 3D cell cultures

    PubMed Central

    Momey, F.; Berdeu, A.; Bordy, T.; Dinten, J.-M.; Marcel, F. Kermarrec; Picollet-D’hahan, N.; Gidrol, X.; Allier, C.

    2016-01-01

New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600

  19. Lensfree diffractive tomography for the imaging of 3D cell cultures.

    PubMed

    Momey, F; Berdeu, A; Bordy, T; Dinten, J-M; Marcel, F Kermarrec; Picollet-D'hahan, N; Gidrol, X; Allier, C

    2016-03-01

New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600

  20. Beat the diffraction limit in 3D direct laser writing in photosensitive glass.

    PubMed

    Bellec, Matthieu; Royon, Arnaud; Bousquet, Bruno; Bourhis, Kevin; Treguer, Mona; Cardinal, Thierry; Richardson, Martin; Canioni, Lionel

    2009-06-01

Three-dimensional (3D) femtosecond laser direct structuring in transparent materials is widely used for photonic applications. However, the structure size is limited by optical diffraction. Here we report on a direct laser writing technique that produces subwavelength nanostructures independently of the experimental limiting factors. We demonstrate 3D nanostructures of arbitrary patterns with feature sizes down to 80 nm, less than one tenth of the laser processing wavelength. Its ease of implementation for novel nanostructuring, together with its high precision, will open new opportunities for the fabrication of nanostructures for plasmonic and photonic devices and for applications in metamaterials. PMID:19506684

  1. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  2. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study

    PubMed Central

    Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 evaluated only 2D images at first, group 2 evaluated 3D images, and, after an interval of 2 weeks, group 1 next evaluated 3D and group 2 evaluated 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of the improvement showed the following trend: novices > trainees > experts. Conclusions. By conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise. PMID:27597863

  3. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study.

    PubMed

    Nomura, Kosuke; Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improve recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. Twelve participants were allocated to two groups. Group 1 first evaluated the 2D images and group 2 the 3D images; after an interval of 2 weeks, group 1 then evaluated the 3D images and group 2 the 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of improvement showed the following trend: novices > trainees > experts. Conclusions. Conversion into 3D images significantly improved the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise. PMID:27597863

  4. LATIS3D: The Gold Standard for Laser-Tissue-Interaction Modeling

    NASA Astrophysics Data System (ADS)

    London, R. A.; Makarewicz, A. M.; Kim, B. M.; Gentile, N. A.; Yang, T. Y. B.

    2000-03-01

    The goal of this LDRD project has been to create LATIS3D, the world's premier computer program for laser-tissue-interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code, KULL. With LATIS3D, important applications in laser medical therapy were researched, including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs, such as ICF and SBSS, and emerging programs in medical technology and other laser applications. The purpose of this project was to develop and apply a computer program for laser-tissue-interaction modeling to aid in the development of new instruments and procedures in laser medicine.

  5. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. A CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with 4.4 ± 3.0% for CT and 3.1 ± 2.0% for MR. The results imply that volume measurement using the 3D US devices has an accuracy level similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US fusion in radiotherapy treatment planning was verified.

  6. The study of craniofacial growth patterns using 3D laser scanning and geometric morphometrics

    NASA Astrophysics Data System (ADS)

    Friess, Martin

    2006-02-01

    Throughout childhood, braincase and face grow at different rates and therefore exhibit variable proportions and positions relative to each other. Our understanding of the direction and magnitude of these growth patterns is crucial for many ergonomic applications and can be improved by advanced 3D morphometrics. The purpose of this study is to investigate this known growth allometry using 3D imaging techniques. The geometry of the head and face of 840 children, aged 2 to 19, was captured with a laser surface scanner and analyzed statistically. From each scan, 18 landmarks were extracted and registered using General Procrustes Analysis (GPA). GPA eliminates unwanted variation due to position, orientation and scale by applying a least-squares superimposition algorithm to individual landmark configurations. This approach provides the necessary normalization for the study of differences in size, shape, and their interaction (allometry). The results show that throughout adolescence, boys and girls follow a different growth trajectory, leading to marked differences not only in size but also in shape, most notably in relative proportions of the braincase. These differences can be observed during early childhood, but become most noticeable after the age of 13 years, when craniofacial growth in girls slows down significantly, whereas growth in boys continues for at least 3 more years.
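
The least-squares superimposition at the core of GPA can be sketched in a few lines. Below is a minimal numpy illustration of aligning one landmark configuration to another; the function name and the 18-landmark example are ours, not from the study, and full GPA iterates this step against a running mean shape.

```python
import numpy as np

def procrustes_align(X, Y):
    """Least-squares superimposition of landmark set Y onto X:
    removes translation (centering), scale (unit centroid size), and
    rotation (orthogonal Procrustes via SVD), as in one GPA step."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)    # centroid size -> 1
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    return Xc, Yc @ (U @ Vt).T      # rotate Y's shape onto X's

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(18, 3))               # 18 landmarks in 3-D
    th = 0.3
    Rz = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0,           0,          1]])
    Y = 2.5 * X @ Rz.T + np.array([10.0, -3.0, 1.0])   # rotated/scaled/shifted copy
    Xa, Ya = procrustes_align(X, Y)
    print(np.allclose(Xa, Ya))     # True: the shapes coincide after alignment
```

After this normalization, residual differences between configurations reflect shape (and, if size is retained separately, allometry) rather than position, orientation, or scale.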

  7. Fabrication of 3D microfluidic structures inside glass by femtosecond laser micromachining

    NASA Astrophysics Data System (ADS)

    Sugioka, Koji; Cheng, Ya

    2014-01-01

    Femtosecond lasers have opened up new avenues in materials processing due to their unique characteristics of ultrashort pulse widths and extremely high peak intensities. One of the most important features of femtosecond laser processing is that a femtosecond laser beam can induce strong absorption even in transparent materials due to nonlinear multiphoton absorption. This makes it possible to directly create three-dimensional (3D) microfluidic structures in glass that are of great use for the fabrication of biochips. For fabrication of the 3D microfluidic structures, two technical approaches are being attempted. One employs femtosecond laser-induced internal modification of glass followed by wet chemical etching using an acid solution (femtosecond laser-assisted wet chemical etching), while the other performs femtosecond laser 3D ablation of the glass in distilled water (liquid-assisted femtosecond laser drilling). This paper provides a review of these two techniques for the fabrication of 3D micro- and nanofluidic structures in glass based on our development and experimental results.

  8. Image enhancement and segmentation of fluid-filled structures in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Dudycha, Stephen; McMorrow, Gerald

    2003-05-01

    Segmentation of fluid-filled structures, such as the urinary bladder, from three-dimensional ultrasound images is necessary for measuring their volume. This paper describes a system for image enhancement, segmentation and volume measurement of fluid-filled structures on 3D ultrasound images. The system was applied for the measurement of urinary bladder volume. Results show an average error of less than 10% in the estimation of the total bladder volume.

  9. Laser nanostructuring 3-D bioconstruction based on carbon nanotubes in a water matrix of albumin

    NASA Astrophysics Data System (ADS)

    Gerasimenko, Alexander Y.; Ichkitidze, Levan P.; Podgaetsky, Vitaly M.; Savelyev, Mikhail S.; Selishchev, Sergey V.

    2016-04-01

    3-D bioconstructions were created by evaporation of a water-albumin solution with carbon nanotubes (CNTs) under continuous and pulsed femtosecond laser radiation. The volume structure of the samples created by the femtosecond radiation has more cavities than the one created by the continuous radiation. The average diameter for multi-walled carbon nanotube (MWCNT) samples was almost two times higher (35-40 nm) than for single-walled carbon nanotube (SWCNT) samples (20-30 nm). The most homogeneous 3-D bioconstruction was formed from MWCNTs by continuous laser radiation. The hardness of such samples reached up to 370 MPa at the nanoscale. The high strength and resistance of the 3-D bioconstructions produced by laser irradiation depend on the volumetric nanotube scaffold formed inside them. The scaffold was formed by the electric field of the directed laser irradiation. The covalent bond energy between the nanotube carbon and the oxygen of the bovine serum albumin amino acid residue amounts to 580 kJ/mol. The 3-D bioconstructions based on MWCNTs and SWCNTs become overgrown with cells (fibroblasts) over the course of 72 hours. The samples based on both types of CNTs are not toxic to the cells and do not change their normal composition and structure. Thus, the 3-D bioconstructions nanostructured by pulsed and continuous laser radiation can be applied as implant materials for the recovery of connective tissues of the living body.

  10. Fast algorithm of 3D median filter for medical image despeckling

    NASA Astrophysics Data System (ADS)

    Xiong, Chengyi; Hou, Jianhua; Gao, Zhirong; He, Xiang; Chen, Shaoping

    2007-12-01

    Three-dimensional (3-D) median filtering is very useful for eliminating speckle noise from medical imaging sources, such as functional magnetic resonance imaging (fMRI) and ultrasonic imaging. 3-D median filtering is characterized by high computation complexity: N³(N³-1)/2 comparison operations are required for 3-D median filtering with an N×N×N window if the conventional bubble-sorting algorithm is adopted. In this paper, an efficient fast algorithm for 3-D median filtering is presented, which considerably reduces the computation complexity of extracting the median of a 3-D data array. Compared to the state-of-the-art, the proposed method reduces the computation complexity of 3-D median filtering by 33%. This efficiently reduces the system delay of the 3-D median filter in software implementations, and the system cost and power consumption in hardware implementations.
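
For reference, a naive 3-D median filter — the baseline that fast algorithms of this kind improve on — can be sketched with numpy. The function name is ours, and border voxels are simply copied unchanged:

```python
import numpy as np

def median3d(vol, w=3):
    """Naive 3-D median filter with a w x w x w window.
    Each interior voxel takes the median of its w**3 neighbours, which
    is exactly the per-voxel cost that fast algorithms reduce."""
    r = w // 2
    out = vol.copy()               # border voxels are left unfiltered
    Z, Y, X = vol.shape
    for z in range(r, Z - r):
        for y in range(r, Y - r):
            for x in range(r, X - r):
                out[z, y, x] = np.median(vol[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1])
    return out

vol = np.zeros((5, 5, 5))
vol[2, 2, 2] = 100.0               # a lone speckle spike
print(median3d(vol)[2, 2, 2])      # 0.0: the spike is removed
```

A lone outlier never reaches the middle of the sorted window, which is why the median is so effective against speckle while preserving edges.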

  11. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549
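
The combination of a global search over a reduced model followed by local refinement can be illustrated with a deliberately simple 1-D analogue — a toy shift estimation, not the paper's attribute-vector method. The coarse exhaustive scan gets the estimate within the capture range of the similarity measure, and local search then refines it:

```python
import numpy as np

# Toy 1-D analogue of global-then-local optimization: estimate the shift
# of a smooth profile. (Illustrative only; the paper estimates rigid 3-D
# transforms of ultrasound volumes.)
ref = np.exp(-((np.arange(256) - 100.0) ** 2) / 200.0)   # smooth "image"
mov = np.roll(ref, 17)                                   # shifted copy

def ssd(s):
    """Sum-of-squared-differences similarity for candidate shift s."""
    return np.sum((np.roll(mov, -s) - ref) ** 2)

coarse = min(range(0, 256, 4), key=ssd)              # global: coarse scan
best = min(range(coarse - 3, coarse + 4), key=ssd)   # local refinement
print(best)                                          # 17
```

The same two-stage logic scales to rigid 3-D transforms: the probe movement model plays the role of the coarse grid, shrinking the space the local optimizer has to cover.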

  12. Computation of optimized arrays for 3-D electrical imaging surveys

    NASA Astrophysics Data System (ADS)

    Loke, M. H.; Wilkinson, P. B.; Uhlemann, S. S.; Chambers, J. E.; Oxby, L. S.

    2014-12-01

    3-D electrical resistivity surveys and inversion models are required to accurately resolve structures in areas with very complex geology where 2-D models might suffer from artefacts. Many 3-D surveys use a grid where the number of electrodes along one direction (x) is much greater than in the perpendicular direction (y). Frequently, due to limitations in the number of independent electrodes in the multi-electrode system, the surveys use a roll-along system with a small number of parallel survey lines aligned along the x-direction. The `Compare R' array optimization method previously used for 2-D surveys is adapted for such 3-D surveys. Offset versions of the inline arrays used in 2-D surveys are included in the number of possible arrays (the comprehensive data set) to improve the sensitivity to structures in between the lines. The array geometric factor and its relative error are used to filter out potentially unstable arrays in the construction of the comprehensive data set. Comparisons of the conventional (consisting of dipole-dipole and Wenner-Schlumberger arrays) and optimized arrays are made using a synthetic model and experimental measurements in a tank. The tests show that structures located between the lines are better resolved with the optimized arrays. The optimized arrays also have significantly better depth resolution compared to the conventional arrays.
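
The array geometric factor used for filtering can be computed directly from the electrode positions. A small sketch under the usual homogeneous half-space assumption (the function name is ours):

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor k for current electrodes a, b and potential
    electrodes m, n on the surface of a homogeneous half-space; the
    apparent resistivity is rho_a = k * dV / I. Arrays with very large
    |k| (large relative error) are the unstable ones filtered out of
    the comprehensive data set."""
    a, b, m, n = (np.asarray(p, float) for p in (a, b, m, n))
    d = np.linalg.norm
    return 2 * np.pi / (1/d(a - m) - 1/d(b - m) - 1/d(a - n) + 1/d(b - n))

# Wenner array with spacing a = 1: the textbook result is k = 2*pi*a
print(geometric_factor([0, 0], [3, 0], [1, 0], [2, 0]))   # 6.2831...
```

Because k multiplies the measured voltage, a large |k| amplifies voltage noise into the apparent resistivity, which is why it serves as a cheap stability screen before inversion.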

  13. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

    Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of single-cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings, and it is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.

  14. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra-high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  15. Laser fabrication of 2D and 3D metal nanoparticle structures and arrays.

    PubMed

    Kuznetsov, A I; Kiyan, R; Chichkov, B N

    2010-09-27

    A novel method for fabrication of 2D and 3D metal nanoparticle structures and arrays is proposed. This technique is based on laser-induced transfer of molten metal nanodroplets from thin metal films. Metal nanoparticles are produced by solidification of these nanodroplets. The size of the transferred nanoparticles can be controllably changed in the range from 180 nm to 1500 nm. Several examples of complex 2D and 3D microstructures generated from gold nanoparticles are demonstrated. PMID:20941016

  16. 3D prostate segmentation of ultrasound images combining longitudinal image registration and machine learning

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Fei, Baowei

    2012-02-01

    We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks were used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between the automatic and manual segmentations is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
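
A single 2-D Gabor kernel of the kind such filter banks are built from can be sketched as follows; the parameter choices are illustrative, not those of the paper:

```python
import numpy as np

def gabor_kernel2d(freq, theta, sigma, size=15):
    """Real 2-D Gabor kernel: a Gaussian-windowed plane wave with spatial
    frequency `freq` (cycles/pixel) along orientation `theta` (radians).
    A filter bank sweeps freq and theta; convolving an image with the
    bank yields the texture feature vector at each pixel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

k = gabor_kernel2d(freq=0.1, theta=0.0, sigma=4.0)
print(k.shape, k[7, 7])            # (15, 15) 1.0 at the kernel centre
```

Per-pixel responses to a bank of these kernels (varying frequency and orientation) form the feature vectors fed to the kernel SVM.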

  17. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source software, and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors, and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method to generate three-dimensional models. Much research has been carried out to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to identify an appropriate combination of sensor and software to provide a complete model with the highest accuracy.
    To do this, different software packages used in previous studies were compared and

  18. 3D imaging of amplitude objects embedded in phase objects using transport of intensity

    NASA Astrophysics Data System (ADS)

    Banerjee, Partha; Basunia, Mahmudunnabi

    2015-09-01

    The amplitude and phase of the complex optical field in the Helmholtz equation obey a pair of coupled equations, arising from equating the real and imaginary parts. The imaginary part yields the transport of intensity equation (TIE), which can be used to derive the phase distribution at the observation plane. If a phase object is approximately imaged on the recording plane(s), TIE yields the phase without the need for phase unwrapping. In our experiment, the 3D image of a phase object and an amplitude object embedded in a phase object is recovered. The phase object is created by heating a liquid, comprising a solution of red dye in alcohol, using a focused 514 nm laser beam to the point where self-phase modulation of the beam is observed. The optical intensities are recorded at various planes during propagation of a low power 633 nm laser beam through the liquid. In the process of applying TIE to derive the phase at the observation plane, the real part of the complex equation is also examined as a cross-check of our calculations. For pure phase objects, it is shown that the real part of the complex equation is best satisfied around the image plane. Alternatively, it is proposed that this information can be used to determine the optimum image plane.
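
The coupled pair referred to above can be written out explicitly. In the standard paraxial treatment, substituting the field u = √I e^{iφ} into the paraxial equation 2ik ∂_z u + ∇⊥²u = 0 and separating real and imaginary parts gives:

```latex
% Paraxial field u = \sqrt{I}\,e^{i\phi}; k is the wavenumber,
% \nabla_\perp the transverse gradient.
\begin{align}
  \text{Imaginary part (TIE):}\quad
    & k\,\frac{\partial I}{\partial z}
      = -\,\nabla_\perp\!\cdot\!\bigl(I\,\nabla_\perp\phi\bigr), \\
  \text{Real part:}\quad
    & 2kI\,\frac{\partial \phi}{\partial z}
      = \sqrt{I}\,\nabla_\perp^{2}\sqrt{I} \;-\; I\,\bigl|\nabla_\perp\phi\bigr|^{2}.
\end{align}
```

The imaginary part is the TIE solved for φ from intensity measurements at neighbouring planes; the real part is the companion relation the authors examine as a cross-check, best satisfied around the image plane for pure phase objects.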

  19. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator which can project up to four different view images to each eye is introduced. This simulator, with images having both disparity and perspective, shows that the depth of field (DOF) is extended beyond the default DOF values as the number of simultaneously but separately projected view images to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments are not as prominent for the image with both disparity and perspective as for the image with disparity only.

  20. Absorption spectrum of the laser-populated 3D metastable levels in barium

    NASA Technical Reports Server (NTRS)

    Carlsten, J. L.; Mcilrath, T. J.; Parkinson, W. H.

    1975-01-01

    This paper deals with the details of the absorption spectrum of the 3D metastable term in barium. The 3D term was selectively populated with a tuneable dye laser. The fundamental triplet series (6s5d 3D-6snf 3F) is identified and extended out to n = 32. In addition, the absolute photoionization cross section was measured at 303 nm. The relative cross section from 303 to 250 nm was also measured with the absolute scale set by the measurement at 303 nm and was found to be nearly constant in the wavelength region measured.

  1. 3D-resolved fluorescence and phosphorescence lifetime imaging using temporal focusing wide-field two-photon excitation

    PubMed Central

    Choi, Heejin; Tzeranis, Dimitrios S.; Cha, Jae Won; Clémenceau, Philippe; de Jong, Sander J. G.; van Geest, Lambertus K.; Moon, Joong Ho; Yannas, Ioannis V.; So, Peter T. C.

    2012-01-01

    Fluorescence and phosphorescence lifetime imaging are powerful techniques for studying intracellular protein interactions and for diagnosing tissue pathophysiology. While lifetime-resolved microscopy has long been in the repertoire of the biophotonics community, current implementations fall short in terms of simultaneously providing 3D resolution, high throughput, and good tissue penetration. This report describes a new highly efficient lifetime-resolved imaging method that combines temporal focusing wide-field multiphoton excitation and simultaneous acquisition of lifetime information in frequency domain using a nanosecond gated imager from a 3D-resolved plane. This approach is scalable allowing fast volumetric imaging limited only by the available laser peak power. The accuracy and performance of the proposed method is demonstrated in several imaging studies important for understanding peripheral nerve regeneration processes. Most importantly, the parallelism of this approach may enhance the imaging speed of long lifetime processes such as phosphorescence by several orders of magnitude. PMID:23187477
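
In the frequency domain, a single-exponential lifetime can be recovered from the measured phase lag and demodulation by the standard FLIM relations; this generic sketch is not the paper's gated-imager pipeline:

```python
import numpy as np

def lifetimes(phi, m, w):
    """Phase and modulation lifetimes for a single-exponential emitter
    excited at angular modulation frequency w (standard frequency-domain
    relations): tau_phi = tan(phi)/w, tau_m = sqrt(1/m**2 - 1)/w."""
    return np.tan(phi) / w, np.sqrt(1.0 / m**2 - 1.0) / w

w = 2 * np.pi * 80e6                     # 80 MHz modulation
tau = 2.5e-9                             # a 2.5 ns fluorophore
phi = np.arctan(w * tau)                 # forward model: phase lag ...
m = 1.0 / np.sqrt(1.0 + (w * tau)**2)    # ... and demodulation
t_phi, t_m = lifetimes(phi, m, w)
print(np.isclose(t_phi, tau) and np.isclose(t_m, tau))   # True
```

For multi-exponential decays the two estimates diverge, which is itself diagnostic; long phosphorescence lifetimes simply call for proportionally lower modulation frequencies.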

  2. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  3. Fabrication of 3D solenoid microcoils in silica glass by femtosecond laser wet etch and microsolidics

    NASA Astrophysics Data System (ADS)

    Meng, Xiangwei; Yang, Qing; Chen, Feng; Shan, Chao; Liu, Keyin; Li, Yanyang; Bian, Hao; Du, Guangqing; Hou, Xun

    2015-02-01

    This paper reports a flexible fabrication method for 3D solenoid microcoils in silica glass. The method combines femtosecond laser wet etching (FLWE) with a microsolidics process. The 3D microchannel with high aspect ratio is fabricated by an improved FLWE method. In the microsolidics process, an alloy was chosen as the conductive metal. The microwires are achieved by injecting liquid alloy into the microchannel and allowing the alloy to cool and solidify. The alloy microwires, with their high melting point, can overcome the limitation of working temperature and improve the electrical properties. The geometry, height, and diameter of the microcoils are flexibly controlled by the pre-designed laser writing path, the laser power, and the etching time. The 3D microcoils can provide a uniform magnetic field and be widely integrated in many magnetic microsystems.

  4. Fast fully 3-D image reconstruction in PET using planograms.

    PubMed

    Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W

    2004-04-01

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
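
The central-section idea the speedup rests on is easiest to verify in its 2-D form: the 1-D Fourier transform of a parallel projection equals a central slice of the image's 2-D Fourier transform. A numpy check of the axis-aligned case (the planogram method generalizes this to 2-D sections of a 4-D transform):

```python
import numpy as np

# Projection-slice check: summing along axis 0 then taking a 1-D FFT
# matches the k0 = 0 row of the 2-D FFT.
img = np.zeros((64, 64))
img[20:40, 25:35] = 1.0                       # simple rectangular phantom

proj = img.sum(axis=0)                        # parallel projection along rows
print(np.allclose(np.fft.fft(proj),           # 1-D FFT of the projection
                  np.fft.fft2(img)[0, :]))    # central slice of the 2-D FFT
```

Because backprojection then reduces to extracting sections of a precomputed FFT, the whole operation becomes a sequence of FFTs rather than explicit ray sums, which is the source of the reported ~15x speedup.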

  5. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain the target trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that one can directly obtain the moving trajectory. In this paper, we have studied the algorithm for flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give motion parameters of moving targets.

  6. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    PubMed

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection-onto-convex-sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low-resolution 3D images which have spatial displacements at the sub-pixel level between each other and generated the reference image. Then, we mapped the low-resolution images into the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the different-resolution images simultaneously. We then estimated the performance of the proposed method on 5 image sets and compared it with that of 3 interpolation reconstruction methods. The experiments showed that the performance of the 3D POCS algorithm was better than that of the 3 interpolation reconstruction methods in both subjective and objective terms, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules. PMID:26710449
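
The POCS principle behind such reconstructions can be illustrated on a toy linear problem: each measurement defines a convex (affine) consistency set, and cyclically projecting onto these sets converges to an estimate consistent with all the data. A minimal sketch, not the paper's 3-D imaging pipeline:

```python
import numpy as np

# Each linear measurement b_i = <a_i, x> defines an affine convex set
# {x : a_i . x = b_i}; cyclic projection onto these sets (Kaczmarz steps)
# is the POCS iteration for a consistent system.
rng = np.random.default_rng(0)
x_true = rng.normal(size=8)
A = rng.normal(size=(12, 8))       # 12 consistent "low-res" measurements
b = A @ x_true

x = np.zeros(8)
for _ in range(500):               # cyclic sweeps over all constraint sets
    for a_i, b_i in zip(A, b):
        x += a_i * (b_i - a_i @ x) / (a_i @ a_i)   # project onto one set
print(np.allclose(x, x_true))      # True: x satisfies every set
```

In the paper's setting, the sets additionally encode the registered low-resolution observations and image-domain constraints; the convergence argument is the same convex-set intersection property.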

  7. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  8. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and non-invasiveness, ultrasound has become an essential tool in obstetrics for diagnosing fetal abnormalities during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering a challenge compared with MRI and CT images. In addition to speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress speckle noise, extract the target object, and clearly render the 3D fetal image in near real time from 3D ultrasound data. The system is based on a deformable model that detects object contours according to the local image features of ultrasound. In addition, to accelerate rendering, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, the system supports quick 3D display of ultrasound, making efficient visualization of 3D fetal ultrasound possible.

  9. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory, which prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  10. 3D reconstruction and characterization of laser induced craters by in situ optical microscopy

    NASA Astrophysics Data System (ADS)

    Casal, A.; Cerrato, R.; Mateo, M. P.; Nicolas, G.

    2016-06-01

    A low-cost optical microscope was developed and coupled to an irradiation system in order to study laser-induced effects on a material during multipulse irradiation by in situ visual inspection of the surface, in particular of the spot generated by successive pulses. In the case of laser ablation, a 3D reconstruction of the crater was made from images of the sample surface taken during the irradiation process, and the corresponding profiles of ablated material were extracted. The implementation of this homemade optical device adds value to the irradiation system by providing information about the morphological evolution of the irradiated area as successive pulses are applied. In particular, the determination of ablation rates in real time can be especially useful for better understanding and control of the ablation process in applications that involve material removal, such as laser cleaning, in-depth characterization of multilayered samples, and diffusion studies. The developed microscope was validated by comparison with a commercial confocal microscope configured for materials characterization; similar crater depths and diameters were obtained with both systems.

  11. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    PubMed

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  12. Burr-like, laser-made 3D microscaffolds for tissue spheroid encagement.

    PubMed

    Danilevicius, Paulius; Rezende, Rodrigo A; Pereira, Frederico D A S; Selimis, Alexandros; Kasyanov, Vladimir; Noritomi, Pedro Y; da Silva, Jorge V L; Chatzinikolaidou, Maria; Farsari, Maria; Mironov, Vladimir

    2015-01-01

    The modeling, fabrication, cell loading, and mechanical and in vitro biological testing of biomimetic, interlockable, laser-made, concentric 3D scaffolds are presented. The scaffolds are made by multiphoton polymerization of an organic-inorganic zirconium silicate. Their mechanical properties are theoretically modeled using finite element analysis and experimentally measured using a Microsquisher(®). They are subsequently loaded with preosteoblastic cells, which remain viable after 24 and 72 h. The interlockable scaffolds maintained their ability to fuse with tissue spheroids. This work represents a novel technological platform enabling rapid, laser-based, in situ 3D tissue biofabrication. PMID:26104190

  13. 3D printing of weft knitted textile based structures by selective laser sintering of nylon powder

    NASA Astrophysics Data System (ADS)

    Beecroft, M.

    2016-07-01

    3D printing is a form of additive manufacturing whereby objects are created by building up layers of material. The selective laser sintering (SLS) process uses a laser beam to sinter powdered material to create objects. This paper builds upon previous research into 3D-printed textile-based materials, exploring the use of SLS with nylon powder to create flexible weft-knitted structures. The results show the potential to print flexible textile-based structures that exhibit the properties of traditional knitted textiles along with the mechanical properties of the material used, while also describing the challenges posed by the fineness of the printing resolution. The conclusion highlights potential future developments and applications of such pieces.

  14. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens stacked in the depth direction. We iterate a simple "shift and subtraction" process to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images for 3D display. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer maintains a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so motion parallax can be generated at the same time.

  15. The foundation of 3D geometry model in omni-directional laser warning system based on diffuse reflection detection

    NASA Astrophysics Data System (ADS)

    Zhang, Weian; Wang, Long; Dong, Qixin

    2011-06-01

    Omni-directional laser warning equipment based on an infrared fish-eye lens and a short-wave infrared FPA has been used to protect large-scale targets: it detects threat laser light scattered by the attacked targets or by surrounding objects, images the laser spot on the FPA, and then locates the spot. This approach offsets the disadvantage of direct-interception warners, which must be deployed in large numbers. Before the imaging mechanism of the scattered laser spot can be studied, the geometric relationships must first be defined. In this paper we develop a 3D geometric model of the positional relationships in a typical battlefield environment among the enemy's threat laser source, the laser spot projected onto a flat surface, and our omni-directional laser-warning fish-eye lens. The model, which comprises eight parameters (R, α, β, d, θ, φ, ψ, δ) and four coordinate systems, is applicable to general situations. From the model we obtain an analytic expression for the laser spot contour on the flat surface, and then an analytic expression for the spot contour on the image surface by calculating the object-space half-field angle and the azimuth angle relative to the fish-eye lens of an arbitrary point on the spot edge. This expression makes it possible to analyze the spot energy distribution on the image surface and the imaging characteristics of the scattered laser spot through the fish-eye lens, and hence to compute the propagation direction of the threat laser. The model provides a foundation and guidance for subsequent research in this area.

  16. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate a 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability. PMID:25465067

  17. Perceptual quality measurement of 3D images based on binocular vision.

    PubMed

    Zhou, Wujie; Yu, Lu

    2015-07-20

    Three-dimensional (3D) technology has become immensely popular in recent years and widely adopted in various applications. Hence, perceptual quality measurement of symmetrically and asymmetrically distorted 3D images has become an important, fundamental, and challenging issue in 3D imaging research. In this paper, we propose a binocular-vision-based 3D image-quality measurement (IQM) metric. Consideration of the 3D perceptual properties of the primary visual cortex (V1) and the higher visual areas (V2) for 3D-IQM is the major technical contribution of this research. To be more specific, first, the metric simulates the receptive fields of complex cells (V1) using binocular energy response and binocular rivalry response, and the higher visual areas (V2) using local binary pattern features. Then, three similarity scores of 3D perceptual properties between the reference and distorted 3D images are measured. Finally, by using support vector regression, the three similarity scores are integrated into an overall 3D quality score. Experimental results for two public benchmark databases demonstrate that, in comparison with most current 2D and 3D metrics, the proposed metric achieves significantly higher consistency in alignment with subjective fidelity ratings. PMID:26367842
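
    The local binary pattern features mentioned for the V2 stage are a standard texture descriptor; a minimal sketch of the basic 8-neighbour LBP code (not necessarily the exact variant used in the paper) is:

```python
def lbp8(img, y, x):
    """Basic 8-neighbour local binary pattern: one bit per neighbour, set when
    the neighbour is >= the centre pixel value."""
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y][x]
    return sum(1 << i for i, (dy, dx) in enumerate(nbrs) if img[y + dy][x + dx] >= c)

img = [[5, 5, 5],
       [5, 4, 3],
       [5, 5, 5]]
code = lbp8(img, 1, 1)   # every neighbour except the value-3 pixel sets a bit
```

    A histogram of such codes over an image patch is what typically feeds a regressor such as the support vector regression stage described above.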

  18. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Exactly which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine-resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
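
    The stereoscopic height idea can be illustrated with the textbook radargrammetry relation (a simplified sketch, not the report's full geometry): ground-range layover of a target at height h is h/tan(depression angle), so the parallax between two passes with different depression angles encodes h.

```python
import math

def height_from_parallax(x1, x2, dep1, dep2):
    """Two-pass radargrammetric height estimate: ground-range layover of a
    target at height h is h / tan(depression), so the parallax between passes
    with depression angles dep1 and dep2 encodes h."""
    return (x1 - x2) / (1.0 / math.tan(dep1) - 1.0 / math.tan(dep2))

# Forward-simulate a target at h = 10 m seen at 30 and 45 degree depression.
h, x0 = 10.0, 100.0
d1, d2 = math.radians(30.0), math.radians(45.0)
x1 = x0 + h / math.tan(d1)     # apparent ground-range position, pass 1
x2 = x0 + h / math.tan(d2)     # apparent ground-range position, pass 2
est = height_from_parallax(x1, x2, d1, d2)    # recovers h
```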

  19. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
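
    The 2D iterative stage can be illustrated with a generic ordered-subsets EM update (a minimal sketch of the standard OSEM algorithm, not the ASPIRE library's implementation; the detector-blur factor is omitted):

```python
def osem(y, A, subsets, x, n_iter=10):
    """Generic OSEM: y = measured sinogram bins, A[i][j] = P(bin i | voxel j),
    subsets = disjoint lists of bin indices, x = positive initial image."""
    nv = len(x)
    for _ in range(n_iter):
        for sub in subsets:
            # forward-project the current estimate into this subset's bins
            fp = {i: sum(A[i][k] * x[k] for k in range(nv)) for i in sub}
            new = x[:]
            for j in range(nv):
                num = sum(A[i][j] * y[i] / max(fp[i], 1e-12) for i in sub)
                den = sum(A[i][j] for i in sub)
                if den > 0:
                    new[j] = x[j] * num / den   # multiplicative EM update
            x = new
    return x

# Toy problem: identity system matrix, so the answer is the data itself.
A = [[1.0, 0.0],
     [0.0, 1.0]]
img = osem([3.0, 5.0], A, [[0], [1]], [1.0, 1.0], n_iter=3)
```

    In a real scanner pipeline the sinograms would first be collapsed from 3D to 2D by Fourier rebinning, and A would include the detector point-spread model mentioned in the abstract.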

  20. Multiview image integration system for glassless 3D display

    NASA Astrophysics Data System (ADS)

    Ando, Takahisa; Mashitani, Ken; Higashino, Masahiro; Kanayama, Hideyuki; Murata, Haruhiko; Funazou, Yasuo; Sakamoto, Naohisa; Hazama, Hiroshi; Ebara, Yasuo; Koyamada, Koji

    2005-03-01

    We have developed a multi-view image integration system, which combines seven parallax video images into a single video image so that it fits the parallax barrier. The apertures of this barrier are not stripes but tiny rectangles that are arranged in the shape of stairs. Commodity hardware is used to satisfy a specification which requires that the resolution of each parallax video image is SXGA(1645×800 pixel resolution), the resulting integrated image is QUXGA-W(3840×2400 pixel resolution), and the frame rate is fifteen frames per second. The point is that the system can provide with QUXGA-W video image, which corresponds to 27MB, at 15fps, that is about 2Gbps. Using the integration system and a Liquid Crystal Display with the parallax barrier, we can enjoy an immersive live video image which supports seven viewpoints without special glasses. In addition, since the system can superimpose the CG images of the relevant seven viewpoints into the live video images, it is possible to communicate with remote users by sharing a virtual object.

  1. Efficient RPG detection in noisy 3D image data

    NASA Astrophysics Data System (ADS)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform a real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.
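
    A range-scaled chain code can be sketched as follows (our own hypothetical reading of the term, since the abstract gives no details): encode the jump boundary as a Freeman 8-direction chain, then convert each step to a physical length using the measured range and the sensor's angular pixel pitch, so matching becomes insensitive to target distance.

```python
# Freeman 8-direction chain code of a boundary, with each step converted to a
# physical length via the measured range (a hypothetical sketch; the paper's
# actual descriptor may differ).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Direction code of each move between consecutive boundary pixels."""
    return [DIRS[(q[0] - p[0], q[1] - p[1])] for p, q in zip(points, points[1:])]

def range_scaled_steps(points, rng, angular_res):
    """Physical step lengths: range x angular pixel pitch, x sqrt(2) for
    diagonal moves (odd direction codes)."""
    return [rng * angular_res * (2 ** 0.5 if c % 2 else 1.0)
            for c in chain_code(points)]

boundary = [(0, 0), (1, 0), (2, 1), (2, 2)]   # pixel coordinates of a contour
codes = chain_code(boundary)                  # [0, 1, 2]
steps = range_scaled_steps(boundary, rng=50.0, angular_res=0.001)
```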

  2. Improvement of integral 3D image quality by compensating for lens position errors

    NASA Astrophysics Data System (ADS)

    Okui, Makoto; Arai, Jun; Kobayashi, Masaki; Okano, Fumio

    2004-05-01

    Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.

  3. Simultaneous acquisition of 3D shape and deformation by combination of interferometric and correlation-based laser speckle metrology

    PubMed Central

    Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2015-01-01

    A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object’s macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured. PMID:26713197

  4. Simultaneous acquisition of 3D shape and deformation by combination of interferometric and correlation-based laser speckle metrology.

    PubMed

    Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2015-12-01

    A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object's macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured. PMID:26713197

  5. 3D registration through pseudo x-ray image generation.

    PubMed

    Viant, W J; Barnel, F

    2001-01-01

    Registration of a pre-operative plan with the intra-operative position of the patient is still a largely unsolved problem. Current techniques generally require fiducials, either artificial or anatomic, to achieve the registration solution. Invariably these fiducials require implantation and/or direct digitisation. The technique described in this paper requires no digitisation or implantation of fiducials, but instead relies on the shape and form of the anatomy through a fully automated image comparison process. A pseudo image, generated from a virtual image intensifier's view of a CT dataset, is intra-operatively compared with a real x-ray image. The principle is to align the virtual with the real image intensifier. The technique is an extension to the work undertaken by Domergue [1] and based on original ideas by Weese [4]. PMID:11317805

  6. Temperature distributions in the laser-heated diamond anvil cell from 3-D numerical modeling

    SciTech Connect

    Rainey, E. S. G.; Kavner, A.; Hernlund, J. W.

    2013-11-28

    We present TempDAC, a 3-D numerical model for calculating the steady-state temperature distribution for continuous wave laser-heated experiments in the diamond anvil cell. TempDAC solves the steady heat conduction equation in three dimensions over the sample chamber, gasket, and diamond anvils and includes material-, temperature-, and direction-dependent thermal conductivity, while allowing for flexible sample geometries, laser beam intensity profile, and laser absorption properties. The model has been validated against an axisymmetric analytic solution for the temperature distribution within a laser-heated sample. Example calculations illustrate the importance of considering heat flow in three dimensions for the laser-heated diamond anvil cell. In particular, we show that a “flat top” input laser beam profile does not lead to a more uniform temperature distribution or flatter temperature gradients than a wide Gaussian laser beam.
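
    The steady heat conduction solve at the core of such a model can be illustrated with a minimal Jacobi relaxation on a small uniform 3-D grid (a sketch only; TempDAC additionally handles material-, temperature-, and direction-dependent conductivity, realistic cell geometry, and laser beam profiles).

```python
def jacobi_step(T, q, k, h):
    """One Jacobi sweep of the steady heat equation k*laplacian(T) + q = 0 on a
    uniform n^3 grid with fixed-temperature (Dirichlet) faces.
    T[i][j][l]: temperature (K), q: heat source (W/m^3), h: grid spacing (m)."""
    n = len(T)
    new = [[[T[i][j][l] for l in range(n)] for j in range(n)] for i in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            for l in range(1, n - 1):
                nbr = (T[i-1][j][l] + T[i+1][j][l] + T[i][j-1][l] + T[i][j+1][l]
                       + T[i][j][l-1] + T[i][j][l+1])
                new[i][j][l] = (nbr + h * h * q[i][j][l] / k) / 6.0
    return new

n = 5
T = [[[300.0] * n for _ in range(n)] for _ in range(n)]   # walls held at 300 K
q = [[[0.0] * n for _ in range(n)] for _ in range(n)]
q[2][2][2] = 1e12        # absorbed laser power density at the hotspot (W/m^3)
for _ in range(200):     # relax toward the steady-state distribution
    T = jacobi_step(T, q, k=10.0, h=1e-6)
peak = T[2][2][2]        # steady-state peak temperature at the laser spot
```

    The numbers here are illustrative only; the point is that heat leaving the hotspot flows in all three dimensions, which is why a 1-D or axisymmetric treatment can misestimate the temperature gradients.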

  7. LATIS3D: The Gold Standard for Laser-Tissue-Interaction Modeling

    SciTech Connect

    London, R.A.; Makarewicz, A.M.; Kim, B.M.; Gentile, N.A.; Yang, Y.B.; Brlik, M.; Vincent, L.

    2000-02-29

    The goal of this LDRD project has been to create LATIS3D--the world's premier computer program for laser-tissue interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code, KULL. With LATIS3D, important applications in laser medical therapy were researched, including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs such as ICF and SBSS and emerging programs in medical technology and other laser applications.

  8. Black silicon: substrate for laser 3D micro/nano-polymerization.

    PubMed

    Žukauskas, Albertas; Malinauskas, Mangirdas; Kadys, Arūnas; Gervinskas, Gediminas; Seniutinas, Gediminas; Kandasamy, Sasikaran; Juodkazis, Saulius

    2013-03-25

    We demonstrate that black silicon (b-Si) made by dry plasma etching is a promising substrate for laser three-dimensional (3D) micro/nano-polymerization. High-aspect-ratio Si needles, working as sacrificial support structures, have the flexibility required to relax interface stresses between the substrate and the polymerized micro-/nano-objects. The surface of b-Si can be made electrically conductive by metal deposition while preserving the low optical reflectivity beneficial for polymerization by direct laser writing. 3D laser polymerization, usually performed at irradiation conditions close to the dielectric breakdown, is possible on non-reflective, non-metallic surfaces. Here we show that low reflectivity and high metallic conductivity are not mutually exclusive properties for laser polymerization. The electrical conductivity of the substrate and its permeability in liquids are promising for bio- and electroplating applications. PMID:23546073

  9. Study of 3D Laser Cladding for Ni85Al15 Superalloy

    NASA Astrophysics Data System (ADS)

    Kotoban, D.; Grigoriev, S.; Shishkovsky, I.

    Conditions for successful 3D laser cladding of a Ni-based superalloy were studied. A high-power Yb-YAG laser was used to create a molten pool on a stainless steel substrate into which a Ni85Al15 powder stream was delivered to build 3D samples. The effect of different laser parameters on the structure and the intermetallic phase content of the manufactured samples was explored by optical metallography, microhardness testing, SEM, X-ray, and EDX analysis. Cladding of the Ni3Al coating with small dilution into the substrate can be obtained at a power density of about 2-8 J/mm2, a laser scan velocity of 100-200 mm/min, and a powder feed rate of ∼3.8 g/min.

  10. Thoracic Pedicle Screw Placement Guide Plate Produced by Three-Dimensional (3-D) Laser Printing.

    PubMed

    Chen, Hongliang; Guo, Kaijing; Yang, Huilin; Wu, Dongying; Yuan, Feng

    2016-01-01

    BACKGROUND The aim of this study was to evaluate the accuracy and feasibility of an individualized thoracic pedicle screw placement guide plate produced by 3-D laser printing. MATERIAL AND METHODS Thoracic pedicle samples of 3 adult cadavers were randomly assigned for 3-D CT scans. The 3-D thoracic models were established by using medical Mimics software, and a screw path was designed with scanned data. Then the individualized thoracic pedicle screw placement guide plate models, matched to the backside of thoracic vertebral plates, were produced with a 3-D laser printer. Screws were placed with assistance of a guide plate. Then, the placement was assessed. RESULTS With the data provided by CT scans, 27 individualized guide plates were produced by 3-D printing. There was no significant difference in sex and relevant parameters of left and right sides among individuals (P>0.05). Screws were placed with assistance of guide plates, and all screws were in the correct positions without penetration of pedicles, under direct observation and anatomic evaluation post-operatively. CONCLUSIONS A thoracic pedicle screw placement guide plate can be produced by 3-D printing. With a high accuracy in placement and convenient operation, it provides a new method for accurate placement of thoracic pedicle screws. PMID:27194139

  11. Thoracic Pedicle Screw Placement Guide Plate Produced by Three-Dimensional (3-D) Laser Printing

    PubMed Central

    Chen, Hongliang; Guo, Kaijing; Yang, Huilin; Wu, Dongying; Yuan, Feng

    2016-01-01

    Background The aim of this study was to evaluate the accuracy and feasibility of an individualized thoracic pedicle screw placement guide plate produced by 3-D laser printing. Material/Methods Thoracic pedicle samples of 3 adult cadavers were randomly assigned for 3-D CT scans. The 3-D thoracic models were established by using medical Mimics software, and a screw path was designed with scanned data. Then the individualized thoracic pedicle screw placement guide plate models, matched to the backside of thoracic vertebral plates, were produced with a 3-D laser printer. Screws were placed with assistance of a guide plate. Then, the placement was assessed. Results With the data provided by CT scans, 27 individualized guide plates were produced by 3-D printing. There was no significant difference in sex and relevant parameters of left and right sides among individuals (P>0.05). Screws were placed with assistance of guide plates, and all screws were in the correct positions without penetration of pedicles, under direct observation and anatomic evaluation post-operatively. Conclusions A thoracic pedicle screw placement guide plate can be produced by 3-D printing. With a high accuracy in placement and convenient operation, it provides a new method for accurate placement of thoracic pedicle screws. PMID:27194139

  12. Registration of multi-view apical 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Mulder, H. W.; van Stralen, M.; van der Zwaan, H. B.; Leung, K. Y. E.; Bosch, J. G.; Pluim, J. P. W.

    2011-03-01

    Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart. However, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusing multiple echocardiography images, and successful registration of the images is essential for successful fusion. Therefore, this study examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets were annotated by two observers, who indicated the position of the apex and four points on the mitral valve ring. These annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined; multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as a similarity measure was compared with that of normalized cross-correlation (NCC). To initialize the registration, a transformation describing the probe movement was obtained by manually registering five representative data sets. We found that multi-frame registration can improve registration results with respect to single-frame registration, and that NCC outperformed MI as a similarity measure. When NCC was optimized in a multi-frame registration strategy including the ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration. As registration precedes image fusion, this method can contribute to improved quality of echocardiography images.
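
    The NCC similarity measure and the multi-frame idea (evaluating the metric over ED and ES frames under one shared transform and combining the scores) can be sketched on flattened intensity lists; this is a generic illustration, not the study's implementation.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two flattened intensity lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def multi_frame_ncc(fixed_frames, moving_frames):
    """Multi-frame metric: average the similarity over several time frames
    (e.g. ED and ES) evaluated under one shared transform."""
    scores = [ncc(f, m) for f, m in zip(fixed_frames, moving_frames)]
    return sum(scores) / len(scores)

ed_fixed, ed_moving = [1, 2, 3, 4], [2, 4, 6, 8]   # linearly related: NCC = 1
es_fixed, es_moving = [4, 3, 2, 1], [5, 4, 3, 2]
score = multi_frame_ncc([ed_fixed, es_fixed], [ed_moving, es_moving])
```

    A registration loop would apply a candidate rigid transform to the moving frames before computing this score and keep the transform that maximizes it.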

  13. 3D registration through pseudo x-ray image generation.

    PubMed

    Domergue, G; Viant, W J

    2000-01-01

    One of the less effective processes within current Computer Assisted Surgery systems, utilizing pre-operative planning, is the registration of the plan with the intra-operative position of the patient. The technique described in this paper requires no digitisation of anatomical features or fiducial markers but instead relies on image matching between pseudo and real x-ray images generated by a virtual and a real image intensifier respectively. The technique is an extension to the work undertaken by Weese [1]. PMID:10977585

  14. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further captured by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  15. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, the computer-generated hologram (CGH) calculated from real existing objects has been more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing with a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed using numerical reconstruction.

  16. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    This work concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, each having a real 3D perception of the visualized scene without extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactical animations and movies have been realized as well.

  17. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s‑1. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
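
The block-averaging idea behind this online pipeline can be illustrated with a short sketch. The `reconstruct` helper below uses a Tikhonov-regularized least-squares solve as a generic stand-in for the MPI reconstruction step (the paper's actual solver is not specified here, and regularized Kaczmarz iterations are common in MPI); the system matrix `S` is hypothetical:

```python
import numpy as np

def block_average_stream(frames, block_size):
    """Yield one averaged raw-data frame per completed block of frames,
    trading temporal resolution for signal quality."""
    buf = []
    for f in frames:
        buf.append(np.asarray(f, dtype=float))
        if len(buf) == block_size:
            yield sum(buf) / block_size
            buf.clear()

def reconstruct(u_avg, S, lam=1e-3):
    """Stand-in reconstruction: Tikhonov-regularized normal equations
    (S^H S + lam I) c = S^H u, solving for the particle concentration c."""
    A = S.conj().T @ S + lam * np.eye(S.shape[1])
    return np.linalg.solve(A, S.conj().T @ u_avg)
```

In an online setting, `block_size` would be adapted so that each averaged block is reconstructed and displayed before the next block completes, bounding the latency.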

  18. Online reconstruction of 3D magnetic particle imaging data.

    PubMed

    Knopp, T; Hofmann, M

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s(-1). However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668

  19. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing, concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, however they lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  20. Improved 3D cellular imaging by multispectral focus assessment

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, there is only one plane in the field of view that is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well-established that the microscope's point spread function (PSF) can be used for blur quantitation, for the restoration of real images. However, this is an ill-posed problem, with no unique solution and with high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the sharpness degree of each pixel, in order to identify blurred areas not to be considered. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum- and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
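
A per-pixel gradient-magnitude sharpness map of the kind described can be sketched as follows. This is an illustrative stand-in, since the paper's exact gradient map and threshold are not given:

```python
import numpy as np

def gradient_sharpness(img):
    """Per-pixel sharpness: local gradient magnitude via central/forward
    differences (np.gradient). In-focus edges yield large values."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def focus_mask(img, frac=0.5):
    """Mask of pixels whose sharpness exceeds a fraction of the image
    maximum; blurred regions fall below the threshold and are excluded."""
    s = gradient_sharpness(img)
    return s >= frac * s.max()
```

Applied per spectral band, such masks would flag the in-focus regions to keep for the subsequent spectrum- and morphometrics-based analysis.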

  1. Laser nanolithography and chemical metalization for the manufacturing of 3D metallic interconnects

    NASA Astrophysics Data System (ADS)

    Jonavičius, Tomas; Rekštytė, Sima; Žukauskas, Albertas; Malinauskas, Mangirdas

    2014-03-01

    We present a method based on direct laser writing (DLW) and chemical metallization (CM) for the microfabrication of three-dimensional (3D) metallic structures. This approach enables the manufacturing of free-form electroconductive interconnects that can be used in integrated electric circuits such as micro-opto-electro-mechanical systems (MOEMS). The proposed technique, employing an ultrafast high-repetition-rate laser, enables efficient fabrication of 3D microstructures on dielectric as well as conductive substrates. The polymer links produced from an organic-inorganic composite matrix serve, after CM, as interconnects between separate metallic contacts; their dimensions are 15 μm in height, 5 μm in width and 35-45 μm in length, and they provide a resistivity of 300 nΩm measured macroscopically. This demonstrates the technique's potential for creating integrated 3D electric circuits at the microscale.

  2. Generation of 3D ellipsoidal laser beams by means of a profiled volume chirped Bragg grating

    NASA Astrophysics Data System (ADS)

    Mironov, S. Yu; Poteomkin, A. K.; Gacheva, E. I.; Andrianov, A. V.; Zelenogorskii, V. V.; Vasiliev, R.; Smirnov, V.; Krasilnikov, M.; Stephan, F.; Khazanov, E. A.

    2016-05-01

    A method for shaping photocathode laser driver pulses into 3D ellipsoidal form has been proposed and implemented. The key idea of the method is to use a chirped Bragg grating recorded within the ellipsoid volume and absent outside it. If a beam with a constant (within the grating reflection band) spectral density and uniform (within the grating aperture) cross-section is incident on such a grating, the reflected beam will be a 3D ellipsoid in space and time. 3D ellipsoidal beams were obtained in experiment for the first time. It is expected that such laser beams will allow the electron bunch emittance to be reduced when applied at RF photo injectors.

  3. Model studies of blood flow in basilar artery with 3D laser Doppler anemometer

    NASA Astrophysics Data System (ADS)

    Frolov, S. V.; Sindeev, S. V.; Liepsch, D.; Balasso, A.; Proskurin, S. G.; Potlov, A. Y.

    2015-03-01

    An integrated approach to the study of basilar artery blood flow using a 3D laser Doppler anemometer is proposed for identifying the causes of the formation and development of cerebral aneurysms. A feature of the work is the combined use of mathematical modeling and experimental methods. The experimental setup and the method of measuring basilar artery blood flow, carried out in an interdisciplinary laboratory of the Hospital Rechts der Isar of the Technical University of Munich, are described. The experimental setup is used to simulate the blood flow in the basilar artery and to measure blood flow characteristics using a 3D laser Doppler anemometer (3D LDA). A method of numerical study carried out at Tambov State Technical University and the Bakoulev Center for Cardiovascular Surgery is also described. An approach for combining experimental and numerical research methods is proposed to identify the causes of basilar artery aneurysms.

  4. Automated generation of NC part programs for excimer laser ablation micromachining from known 3D surfaces

    NASA Astrophysics Data System (ADS)

    Mutapcic, Emir; Iovenitti, Pio G.; Hayes, Jason P.

    2002-11-01

    The purpose of this research project is to improve the capability of the laser micromachining process, so that any desired 3D surface can be produced by taking the 3D information from a CAD system and automatically generating the NC part programs. In addition, surface quality should be able to be controlled by specifying optimised parameters. This paper presents the algorithms and a software system that processes 3D geometry in STL file format from a CAD system and produces the NC part program to mill the surface using the excimer laser ablation process. Simple structures are used to demonstrate the prototype system's part programming capabilities, and an actual surface is machined.

  5. 3D pulsed laser-triggered high-speed microfluidic fluorescence-activated cell sorter.

    PubMed

    Chen, Yue; Wu, Ting-Hsiang; Kung, Yu-Chun; Teitell, Michael A; Chiou, Pei-Yu

    2013-11-12

    We report a 3D microfluidic pulsed laser-triggered fluorescence-activated cell sorter capable of sorting at a throughput of 23 000 cells per s with 90% purity in high-purity mode and at a throughput of 45 000 cells per s with 45% purity in enrichment mode in one stage and in a single channel. This performance is realized by exciting laser-induced cavitation bubbles in a 3D PDMS microfluidic channel to generate high-speed liquid jets that deflect detected fluorescent cells and particles focused by 3D sheath flows. The ultrafast switching mechanism (20 μs complete on-off cycle), small liquid jet perturbation volume, and three-dimensional sheath flow focusing for accurate timing control of fast (1.5 m s(-1)) passing cells and particles are three critical factors enabling high-purity sorting at high-throughput in this sorter. PMID:23844418

  6. Laser micromachining of through via interconnects in active die for 3-D multichip module

    SciTech Connect

    Chu, D.; Miller, W.D.

    1995-09-01

    One method to increase density in integrated circuits (IC) is to stack die to create a 3-D multichip module (MCM). In the past, special post-wafer processing was done to bring interconnects out to the edge of the die. The die were sawed, glued, and stacked. Special processing was done to create interconnects on the edge to provide for interconnects to each of the die. These processes require an IC-type fabrication facility (fab) and special processing equipment. In contrast, we have developed packaging assembly methods to create vertical through vias in bond pads of active silicon die, isolate these vias, and metal-fill these vias without the use of a special IC fab. These die with through vias can then be joined and stacked to create a 3-D MCM. Vertical through vias in active die are created by laser micromachining using an Nd:YAG laser. Besides the fundamental 1064 nm (infrared) wavelength, modifications to our Nd:YAG laser allowed us to generate the second harmonic at 532 nm (green) and the fourth harmonic at 266 nm (ultraviolet) for laser micromachining of these vias. Experiments were conducted to determine the best laser wavelengths to use for laser micromachining of vertical through vias in order to minimize damage to the active die. Via isolation experiments were done in order to determine the best method of isolating the bond pads of the die. Die thinning techniques were developed to allow for die thickness as thin as 50 μm. This would allow for high 3-D density when the die are stacked. A method was developed to metal-fill the vias with solder using a wire bonder with solder wire.

  7. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilizes, as its initial condition, a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272

  8. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  9. Remote laboratory for phase-aided 3D microscopic imaging and metrology

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

    2014-05-01

    In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components: the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. A virtual network computer (VNC) is introduced to remotely control the 3D microscopic imaging system. Data storage and management are handled through the open-source project eSciDoc. To secure the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, is employed to achieve 3D imaging and metrology of micro objects.

  10. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell-type interactions in a complex 3D matrix, highly resembling physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale, as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  11. 3D digital breast tomosynthesis image reconstruction using anisotropic total variation minimization.

    PubMed

    Seyyedi, Saeed; Yildirim, Isa

    2014-01-01

    This paper presents a compressed sensing based reconstruction method for 3D digital breast tomosynthesis (DBT) imaging. The algebraic reconstruction technique (ART) has been used in DBT imaging by minimizing the isotropic total variation (TV) of the reconstructed image. The resolution in DBT differs in the sagittal and axial directions, which should be accounted for during TV minimization. In this study we develop a 3D anisotropic TV (ATV) minimization that considers the different resolutions in different directions. A customized 3D Shepp-Logan phantom was generated to mimic a real DBT image by considering the overlapping-tissue and directional-resolution issues. Results of ART, ART+3D TV and ART+3D ATV are compared using structural similarity (SSIM) diagrams. PMID:25571377
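
An anisotropic TV term of this kind can be written as axis-weighted sums of forward differences. The weights below are illustrative placeholders reflecting the idea of a lower axial resolution, not values from the paper:

```python
import numpy as np

def anisotropic_tv(vol, w=(1.0, 1.0, 0.2)):
    """Anisotropic 3D total variation: L1 norm of forward differences
    along each axis, with a per-axis weight w[k] so that the low-resolution
    (depth) direction is penalized differently from the in-plane directions."""
    vol = np.asarray(vol, dtype=float)
    tv = 0.0
    for axis, wk in enumerate(w):
        d = np.diff(vol, axis=axis)   # forward differences along this axis
        tv += wk * np.abs(d).sum()
    return tv
```

Setting all weights equal recovers the isotropic-in-weighting L1 TV that plain ART+TV would minimize; the ATV variant simply re-balances the three directional terms.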

  12. 3D Laser Scanning Modeling and Application on Dazu Thousand-hand Bodhisattva in China

    NASA Astrophysics Data System (ADS)

    Hou, M.; Zhang, X.; Wu, Y.; Hu, Y.

    2014-04-01

    The Dazu Thousand-hand Bodhisattva Statue is located at Baoding Mountain in Chongqing. It has a reputation as "the Gem of World's Rock Carving Art". At present, the Dazu Thousand-hand Bodhisattva Statue is basically well conserved, while local damage is already very serious. Because the Dazu Thousand-hand Bodhisattva Statue is a three-dimensional carved statue, conventional plane surveying and mapping devices cannot reflect its state of preservation completely. Therefore, the documentation of the Dazu Thousand-hand Bodhisattva Statue using terrestrial laser scanning is of great significance. This paper introduces a new method for superfine 3D modeling of the Thousand-hand Bodhisattva based on high-resolution 3D point clouds. By analyzing these point clouds and 3D models, useful information, such as 3D statistics, 3D thematic maps and 3D shape restoration suggestions for the Thousand-hand Bodhisattva, can be revealed, which is beneficial to restoration work and other applications.

  13. Experimental investigation of 3D scanheads for laser micro-processing

    NASA Astrophysics Data System (ADS)

    Penchev, Pavel; Dimov, Stefan; Bhaduri, Debajyoti

    2016-07-01

    The broader use of laser micro-processing technology increases the demand for executing complex machining and joining operations on free-form (3D) workpieces. To satisfy these growing requirements it is necessary to utilise 3D scanheads that integrate beam deflectors (X and Y optical axes) and Z modules with high dynamics. The research presented in this communication proposes an experimental technique to quantify the dynamic capabilities of Z modules, also called Dynamic Focusing Modules (DFM), of such 3D scanheads, which are essential for efficient, accurate and repeatable laser micro-processing of free-form surfaces. The proposed experimental technique is validated on a state-of-the-art laser micro-machining platform and the results show that the DFM dynamic capabilities are substantially inferior to those of the X and Y beam deflectors; in particular, the maximum speed of the Z module is less than 10% of the maximum speeds achievable with the X and Y optical axes of the scanhead. Thus, DFM dynamics deficiencies can become a major obstacle to the broader use of high-frequency laser sources, which necessitate high-dynamics 3D scanheads for cost-effectively executing free-form surface processing operations.

  14. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the locations of a structure are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.

  15. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    PubMed

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information in live 2D images, they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range, the proposed method was the most robust and thus a good candidate for application in EIGI. PMID:23649179

  16. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First generation spaceborne altimetric approaches are not well-suited to generating the few meter level horizontal resolution and decimeter accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few meter transverse resolutions globally using conventional approaches and offers a feasible conceptual design which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction.

  17. Space Radar Image Isla Isabela in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults, and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing.
The SIR-C/X-SAR data

  18. Radar Imaging of Spheres in 3D using MUSIC

    SciTech Connect

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for one and two spheres. The performance for the three-sphere configurations is complicated by shadowing effects and the greater range of the third sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
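
The SVD/noise-subspace step and the MUSIC functional described above can be sketched as follows. The `steering` model is a hypothetical stand-in for the array's Green's-function vectors, and no ground-reflection or beam-pattern normalization is included:

```python
import numpy as np

def music_image(K, steering, n_signal):
    """Build the MUSIC pseudo-spectrum from a multistatic response matrix.

    K        : (n_rx, n_tx) measured scattering response matrix
    steering : function g(p) -> (n_rx,) complex array, a (hypothetical)
               Green's-function vector for a trial location p
    n_signal : number of singular values kept as the signal subspace
    """
    U, s, Vh = np.linalg.svd(K)
    noise = U[:, n_signal:]                    # noise-subspace basis
    def functional(p):
        g = steering(p)
        g = g / np.linalg.norm(g)
        proj = noise.conj().T @ g              # component of g in noise subspace
        # Large where g is (nearly) orthogonal to the noise subspace,
        # i.e. at true target locations.
        return 1.0 / (np.linalg.norm(proj) ** 2 + 1e-12)
    return functional
```

Evaluating `functional` over a grid of trial locations and taking its peaks gives the target image; the 1% noise threshold mentioned in the abstract would correspond to choosing `n_signal` from the singular-value spectrum.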

  19. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The differences in position between corresponding regions in the left and right images are then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
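
    The convergence-point estimation described above (vertical-edge keypoints in the left image, block matching along the same row of the right image, a disparity histogram, then a shift derived from the histogram's extrema) can be sketched on the CPU with NumPy. Block size, edge threshold, keypoint subsampling, and the use of the range midpoint as the shift are illustrative choices, not the player's actual parameters, and the real player runs this on the GPU with CUDA:

```python
import numpy as np

def convergence_shift(left, right, block=8, max_disp=32, edge_thresh=10.0):
    """Estimate the horizontal shift that brings a stereo pair to convergence.

    Pipeline (after the abstract): vertical-edge keypoints in the left image,
    SAD block matching along the same row of the right image, a disparity
    histogram, and a shift taken from the histogram's occupied range.
    """
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    h, w = left.shape
    # 1. Keypoints: pixels with a strong horizontal gradient (vertical edges).
    gx = np.abs(np.diff(left, axis=1))
    ys, xs = np.where(gx > edge_thresh)
    disparities = []
    for y, x in zip(ys[::50], xs[::50]):              # subsample keypoints
        if (y < block or y + block >= h or
                x < block or x + block + max_disp >= w):
            continue
        patch = left[y - block:y + block, x - block:x + block]
        # 2. SAD block matching along the epipolar (same) row on the right.
        costs = [np.abs(patch - right[y - block:y + block,
                                      x - block + d:x + block + d]).sum()
                 for d in range(max_disp)]
        disparities.append(int(np.argmin(costs)))
    if not disparities:
        return 0
    hist, _ = np.histogram(disparities, bins=max_disp, range=(0, max_disp))
    occupied = np.nonzero(hist)[0]
    # 3. Centre of the occupied disparity range -> convergence shift
    #    (a simple stand-in for the histogram-extrema rule in the paper).
    return int((occupied[0] + occupied[-1]) // 2)
```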

  20. 3D geometric modeling and simulation of laser propagation through turbulence with plenoptic functions

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Nelson, William; Davis, Christopher C.

    2014-10-01

    Plenoptic functions are functions that preserve all the necessary light field information of optical events. Theoretical work has demonstrated that geometric based plenoptic functions can serve equally well in the traditional wave propagation equation known as the "scalar stochastic Helmholtz equation". However, in addressing problems of 3D turbulence simulation, the dominant methods using phase screen models have limitations both in explaining the choice of parameters (on the transverse plane) in real-world measurements, and finding proper correlations between neighboring phase screens (the Markov assumption breaks down). Though possible corrections to phase screen models are still promising, the equivalent geometric approach based on plenoptic functions begins to show some advantages. In fact, in these geometric approaches, a continuous wave problem is reduced to discrete trajectories of rays. This allows for convenience in parallel computing and guarantees conservation of energy. Besides the pairwise independence of simulated rays, the assigned refractive index grids can be directly tested by temperature measurements with tiny thermoprobes combined with other parameters such as humidity level and wind speed. Furthermore, without loss of generality one can break the causal chain in phase screen models by defining regional refractive centers to allow rays that are less affected to propagate through directly. As a result, our work shows that the 3D geometric approach serves as an efficient and accurate method in assessing relevant turbulence problems with inputs of several environmental measurements and reasonable guesses (such as Cn2 levels). This approach will facilitate analysis and possible corrections in lateral wave propagation problems, such as image de-blurring, prediction of laser propagation over long ranges, and improvement of free space optic communication systems. In this paper, the plenoptic function model and relevant parallel algorithm computing
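
    The core of such a geometric approach is integrating individual rays through a refractive-index field; since each ray is independent of the others, the loop over rays parallelizes trivially. A minimal (assumed) Euler integration of the geometric-optics ray equation d/ds(n dr/ds) = grad n might look like the following; in a real simulation n and its gradient would be interpolated from the measured refractive-index grid:

```python
import numpy as np

def trace_ray(n_field, grad_n, r0, d0, step=0.01, n_steps=1000):
    """Trace one geometric ray through a refractive-index field.

    n_field, grad_n : callables giving n(r) and grad n(r) at a 3-vector r.
    r0, d0          : start position and unit start direction.
    Returns the trajectory as an (n_steps + 1, 3) array.
    """
    r = np.asarray(r0, float)
    v = np.asarray(d0, float) * n_field(r)   # optical direction vector n * dr/ds
    path = [r.copy()]
    for _ in range(n_steps):
        r = r + step * v / n_field(r)        # advance position along the ray
        v = v + step * grad_n(r)             # bend according to the index gradient
        path.append(r.copy())
    return np.array(path)
```

    In a uniform medium the ray stays straight; an index gradient transverse to the ray bends it toward the denser region, which is the mechanism the turbulence grid drives.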

  1. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  2. An Image-Based Technique for 3d Building Reconstruction Using Multi-View Uav Images

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides enough detail of the building based on visual assessment.
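
    The meshing step named above can be illustrated with SciPy: a 2.5D Delaunay triangulation triangulates only the planimetric (x, y) coordinates and lifts the triangles back to 3D using each vertex's height. This is a generic sketch of the technique, not the authors' implementation; it assumes one height per planimetric location, which holds for roofs and terrain but not for true overhangs:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_25d(points):
    """2.5D Delaunay mesh of a point cloud.

    points : (N, 3) array of (x, y, z) samples.
    Returns (vertices, triangles) where triangles is an (M, 3) array of
    vertex indices produced by triangulating in the horizontal plane.
    """
    points = np.asarray(points, float)
    tri = Delaunay(points[:, :2])       # triangulate in the (x, y) plane only
    return points, tri.simplices        # z values ride along with the vertices
```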

  3. Visualization of 3D images from multiple texel images created from fused LADAR/digital imagery

    NASA Astrophysics Data System (ADS)

    Killpack, Cody C.; Budge, Scott E.

    2015-05-01

    The ability to create 3D models, using registered texel images (fused ladar and digital imagery), is an important topic in remote sensing. These models are automatically generated by matching multiple texel images into a single common reference frame. However, rendering a sequence of independently registered texel images often provides challenges. Although accurately registered, the model textures are often incorrectly overlapped and interwoven when using standard rendering techniques. Consequently, corrections must be done after all the primitives have been rendered, by determining the best texture for any viewable fragment in the model. Determining the best texture is difficult, as each texel image remains independent after registration. The depth data is not merged to form a single 3D mesh, thus eliminating the possibility of generating a fused texture atlas. It is therefore necessary to determine which textures are overlapping and how to best combine them dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can now be hidden, exposed, or blended according to their computed measure of reliability.
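
    One plausible form of such a view-dependent ranking — scoring each overlapping texture by how well its capture direction aligns with the current view direction and, optionally, how squarely it faces the surface — can be sketched as follows. The scoring rule is an illustrative assumption, not the paper's exact reliability measure:

```python
import numpy as np

def rank_textures(view_dir, capture_dirs, normals=None):
    """Rank overlapping texel-image textures for one screen fragment.

    view_dir     : unit 3-vector, current viewing direction.
    capture_dirs : (K, 3) unit vectors, direction each texture was captured from.
    normals      : optional (K, 3) surface normals at the fragment; grazing
                   captures are penalized when these are supplied.
    Returns texture indices ordered from most to least reliable.
    """
    view_dir = np.asarray(view_dir, float)
    scores = []
    for i, c in enumerate(np.asarray(capture_dirs, float)):
        score = float(view_dir @ c)            # view / capture alignment
        if normals is not None:
            score *= float(-c @ normals[i])    # grazing captures score lower
        scores.append(score)
    return np.argsort(scores)[::-1]            # best texture first
```

    At render time the top-ranked fragment would be shown, or the top few blended in proportion to their scores.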

  4. A featureless approach to 3D polyhedral building modeling from aerial images.

    PubMed

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
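
    The optimization strategy — minimizing an image dissimilarity directly over model parameters with Differential Evolution, with no feature extraction — can be illustrated on a toy 1D analogue. The gabled-roof profile, its two parameters, and the sum-of-squared-differences dissimilarity below are stand-ins invented for illustration, not the paper's objective:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the paper's objective: recover the parameters of a
# synthetic gabled-roof height profile directly from "brightness" samples.
# The true roof has its ridge at x = 0.3 with height 2.0 (illustrative values).
x = np.linspace(0.0, 1.0, 200)

def roof(ridge, height):
    # Piecewise-linear gable: rises to `height` at `ridge`, then falls to 0.
    return np.where(x < ridge, height * x / ridge,
                    height * (1.0 - x) / (1.0 - ridge))

observed = roof(0.3, 2.0)

def dissimilarity(params):
    # Featureless SSD objective evaluated directly on the profiles.
    ridge, height = params
    return float(np.sum((roof(ridge, height) - observed) ** 2))

result = differential_evolution(dissimilarity,
                                bounds=[(0.05, 0.95), (0.5, 5.0)],
                                seed=1, tol=1e-10)
ridge_est, height_est = result.x
```

    Differential Evolution needs only objective values, no gradients, which is what makes a raw-brightness objective with many local minima tractable.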

  5. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

  6. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support, to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration, and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position, location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.

  7. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components including lenslet arrays and a 2D sensor to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at more than 75 frames per second.
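
    With simultaneous A-scans on a lateral grid yielding 3D surface samples, corneal curvature can be estimated by fitting a sphere to the sampled surface. The algebraic least-squares fit below is a generic sketch of that reduction, not the instrument's actual algorithm:

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit to 3D surface samples.

    Rewrites |p - c|^2 = r^2 as the linear system
    2 c . p + (r^2 - |c|^2) = |p|^2 and solves it in the least-squares
    sense. Returns (center, radius).
    """
    pts = np.asarray(pts, float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

    The radius of the fitted sphere over the central corneal zone is a standard proxy for corneal curvature; registration of consecutive frames can reuse the same surface samples.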

  8. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  9. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  10. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens which are not obvious prior to registration.
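
    The similarity measure underlying the registration — mutual information between two images, estimated from their joint intensity histogram — can be sketched directly. The bin count is an illustrative choice; ITK's implementation uses more sophisticated density estimation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized grayscale
    images, computed from their joint intensity histogram. Registration
    seeks the transform of one image that maximizes this value."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    A perfectly aligned pair shares maximal information, while scrambling one image drives the measure toward zero, which is what makes it a usable registration objective across staining variations.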

  11. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
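
    The linear step of such a method — one equation x2^T E x1 = 0 per point correspondence, eight correspondences, and the essential matrix E recovered up to scale as the null vector of the resulting 8x9 system — can be sketched as follows. Recovering the null vector via SVD is a standard way to solve the homogeneous system; variable names here are illustrative:

```python
import numpy as np

def essential_from_eight_points(x1, x2):
    """Linear eight-point estimate of the essential matrix E.

    x1, x2 : (8, 3) corresponding points in normalized homogeneous image
             coordinates in the two frames. Each pair contributes one row
             of the epipolar constraint x2^T E x1 = 0; E (up to scale) is
             the null vector of the stacked 8x9 system.
    """
    A = np.array([np.outer(p2, p1).ravel() for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)   # right singular vector of smallest singular value
```

    The motion (rotation and translation direction) and per-point depths are then extracted from E, which is where the 3x3 SVD mentioned in the abstract enters.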

  12. Task-specific evaluation of 3D image interpolation techniques

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Miki, Yukio

    1998-06-01

    Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature. However, their systematic evaluation is lacking. At a previous meeting, we presented a framework for the task-independent comparison of interpolation methods based on a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this new work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of Multiple Sclerosis (MS) patients. Sixty lesion detection experiments, derived from ten patient studies, two subsampling techniques and the original data, and three interpolation methods, are presented along with a statistical analysis of the results. This work comprises a systematic framework for the task-specific comparison of interpolation methods. Specifically, the influence of three interpolation methods on MS lesion quantification is compared.

  13. A 3-D fluorescence imaging system incorporating structured illumination technology

    NASA Astrophysics Data System (ADS)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, OptigridTM, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the OptigridTM and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the OptigridTM onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the OptigridTM at different spatial frequencies, and supports the use of different lenses. A calibration process was developed for the system to achieve consistent phase shifts of the OptigridTM. Post-processing extracted depth information using depth-modulation analysis of a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
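
    A classic form of depth-modulation analysis for structured illumination is square-law demodulation of three grid-illuminated frames taken at phase shifts of 0, 2π/3 and 4π/3 (Neil, Juškaitis and Wilson); whether the student-built system used exactly this rule is an assumption here:

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Optically sectioned image from three grid-illuminated frames at
    phase shifts of 0, 2*pi/3 and 4*pi/3. Only in-focus structure is
    modulated by the grid, so this difference combination suppresses
    out-of-focus light while cancelling the unmodulated background."""
    i1, i2, i3 = (np.asarray(i, float) for i in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

    For frames of the form base + m·cos(φ + 2πk/3) the result is a constant multiple of the modulation depth m, independent of the grid phase φ and of the background level, which is why the consistent phase shifts obtained from calibration matter.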

  14. Fusion of Terrestrial and Airborne Laser Data for 3D modeling Applications

    NASA Astrophysics Data System (ADS)

    Mohammed, Hani Mahmoud

    This thesis deals with the 3D modeling phase of as-built large BIM projects. Among several means of BIM data capturing, such as photogrammetric or range tools, laser scanners have long been among the most efficient and practical tools. They can generate point clouds with high resolution for 3D models that meet today's market demands. Current 3D modeling projects of as-built BIMs are mainly focused on using one type of laser scanner data, such as airborne or terrestrial. According to the literature, few significant efforts have been made towards the fusion of heterogeneous laser scanner data, despite its importance. The importance of fusing heterogeneous data arises from the fact that no single type of laser data can provide all the information about a BIM, especially for large BIM projects that extend over a large area, such as university buildings or heritage sites. Terrestrial laser scanners are able to map facades of buildings and other terrestrial objects. However, they lack the ability to map roofs or higher parts of the BIM project. Airborne laser scanners, on the other hand, can map roofs of the buildings efficiently but can map only a small part of the facades. Short-range laser scanners can map the interiors of BIM projects, while long-range scanners are used for mapping wide exterior areas. In this thesis the long-range laser scanner data obtained in the stop-and-go mapping mode, the short-range laser scanner data obtained in a fully static mapping mode, and the airborne laser data are all fused together to bring a complete, effective solution for a large BIM project. Working towards the 3D modeling of BIM projects, the thesis framework starts with the registration of the data, for which a new fast automatic registration algorithm was developed. The next step is to recognize the different objects in the BIM project (classification) and obtain 3D models for the buildings. 
The last step is the development of an

  15. Observing molecular dynamics with time-resolved 3D momentum imaging

    NASA Astrophysics Data System (ADS)

    Sturm, F. P.; Wright, T.; Bocharova, I.; Ray, D.; Shivaram, N.; Cryan, J.; Belkacem, A.; Weber, T.; Dörner, R.

    2014-05-01

    Photo-excitation and ionization trigger rich dynamics in molecular systems which play a key role in many important processes in nature such as vision, photosynthesis or photoprotection. Observing those reactions in real time without significantly disturbing the molecules by a strong electric field has been a great challenge. Recent experiments using Time-of-Flight and Velocity Map Imaging techniques have revealed important information on the dynamics of small molecular systems upon photo-excitation. We have developed an apparatus for time-resolved momentum imaging of electrons and ions in all three spatial dimensions that employs two-color femtosecond laser pulses in the vacuum and extreme ultraviolet (VUV, XUV) for probing molecular dynamics. Our COLTRIMS-style reaction microscope can measure electrons and ions in coincidence and reconstruct the momenta of the reaction fragments in 3D. We use a high-power 800 nm laser focused into a gas cell in a loose focusing geometry to efficiently drive high harmonic generation. The resulting photon flux is sufficient to perform 2-photon pump-probe experiments using VUV and XUV pulses for both pump and probe. With this setup we investigate non-Born-Oppenheimer dynamics in small molecules such as C2H4 and CO2 on a femtosecond time scale. Supported by the Chemical Sciences, Geosciences and Biosciences division of BES/DOE.

  16. An evaluation of cine-mode 3D portal image dosimetry for Volumetric Modulated Arc Therapy

    NASA Astrophysics Data System (ADS)

    Ansbacher, W.; Swift, C.-L.; Greer, P. B.

    2010-11-01

    We investigated cine-mode portal imaging on a Varian Trilogy accelerator and found that the linearity and other dosimetric properties are sufficient for 3D dose reconstruction as used in patient-specific quality assurance for VMAT (RapidArc) treatments. We also evaluated the gantry angle label in the portal image file header as a surrogate for the true imaged angle. The precision is only just adequate for the 3D evaluation method chosen, as discrepancies of 2° were observed.

  17. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; and the third approach is close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging other pieces of the large area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was then transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology Roorkee, which acts as a prototype for a city. 
Aerial photography is restricted in many countries

  18. In Vivo 3D Meibography of the Human Eyelid Using Real Time Imaging Fourier-Domain OCT

    PubMed Central

    Hwang, Ho Sik; Shin, Jun Geun; Lee, Byeong Ha; Eom, Tae Joong; Joo, Choun-Ki

    2013-01-01

    Recently, we reported obtaining tomograms of meibomian glands from healthy volunteers using commercial anterior segment optical coherence tomography (AS-OCT), which is widely employed in clinics for examination of the anterior segment. However, we could not create 3D images of the meibomian glands, because the commercial OCT does not have a 3D reconstruction function. In this study we report the creation of 3D images of the meibomian glands by reconstructing the tomograms of these glands using high-speed Fourier-Domain OCT (FD-OCT) developed in our laboratory. This research was jointly undertaken at the Department of Ophthalmology, Seoul St. Mary's Hospital (Seoul, Korea) and the Advanced Photonics Research Institute of Gwangju Institute of Science and Technology (Gwangju, Korea) with two healthy volunteers and seven patients with meibomian gland dysfunction. A real-time imaging FD-OCT system based on a high-speed wavelength-swept laser was developed that had a spectral bandwidth of 100 nm at the 1310 nm center wavelength. The axial resolution was 5 µm and the lateral resolution was 13 µm in air. Using this device, the meibomian glands of nine subjects were examined. A series of tomograms from the upper eyelid measuring 5 mm (from left to right, B-scan) × 2 mm (from upper part to lower part, C-scan) was collected. 3D images of the meibomian glands were then reconstructed using 3D data visualization, analysis, and modeling software. Established infrared meibography was also performed for comparison. The 3D images of healthy subjects clearly showed the meibomian glands, which looked similar to bunches of grapes. These results were consistent with previous infrared meibography results. The meibomian glands were parallel to each other, and the saccular acini were clearly visible. Here we report the successful production of 3D images of human meibomian glands by reconstructing tomograms of these glands with high-speed FD-OCT. PMID:23805297

  19. Space Radar Image of Missoula, Montana in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. The highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue represents differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA
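The channel assignment described above (red = L-band VV backscatter, green = C-band VV backscatter, blue = day-to-day L-band difference) is a standard way to build a false-color composite from multi-band radar data. A minimal sketch with hypothetical arrays, assuming each band has already been co-registered and expressed as a 2D backscatter image:

```python
import numpy as np

def radar_composite(l_vv, c_vv, l_diff):
    """Build an RGB false-color composite from three 2D radar bands:
    red = L-band VV, green = C-band VV, blue = L-band difference."""
    def norm(band):
        # Percentile stretch: clip outliers so the display uses
        # the full [0, 1] range without being dominated by bright pixels.
        lo, hi = np.percentile(band.astype(float), (2, 98))
        return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return np.dstack([norm(l_vv), norm(c_vv), norm(l_diff)])

# Hypothetical 64 x 64 band images standing in for real SAR data
rgb = radar_composite(np.random.rand(64, 64),
                      np.random.rand(64, 64),
                      np.random.rand(64, 64))
print(rgb.shape)  # (64, 64, 3)
```

The resulting (height, width, 3) array is what gets draped over the digital elevation model to produce the colored perspective view.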

  20. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

This is a three-dimensional perspective view of Long Valley, California, created from data acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. 
SIR-C was developed by NASA's Jet Propulsion Laboratory

  1. Space Radar Image of Long Valley, California in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are

  2. Space Radar Image of Karakax Valley, China 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for the clues it may offer about the behavior of large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted
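The repeat-pass interferometry these radar records rely on rests on one core operation: forming the wrapped phase difference between two co-registered complex SAR images. The sketch below shows only that arithmetic step with synthetic data; a real pipeline (as used for SIR-C/X-SAR products) also requires precise co-registration, flat-earth phase removal, and phase unwrapping before the phase can be converted to elevation.

```python
import numpy as np

def interferogram(pass1, pass2):
    """Form a repeat-pass interferogram from two co-registered complex
    SAR images. The wrapped phase (in (-pi, pi]) is, after unwrapping
    and flat-earth correction, proportional to terrain elevation."""
    return np.angle(pass1 * np.conj(pass2))

# Synthetic complex "single-look" images standing in for two shuttle passes
rng = np.random.default_rng(0)
s1 = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
s2 = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
phase = interferogram(s1, s2)
print(phase.shape)  # (32, 32)
```

Multiplying one image by the conjugate of the other cancels the common backscatter phase and leaves the path-length difference between the two passes, which is what encodes topography.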

  3. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images taken by the same camera while the object is rotated incrementally by a small angle between acquisitions. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing repeated imaging of the same animal transplanted with gene-marked cells. To visualize the structure of the tumor in 3D, we also co-register the coarse BLI-reconstructed structure with the detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.
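The geometric idea behind recovering a 3D location from single-camera views at known rotation angles can be illustrated with a toy least-squares triangulation. This is not the authors' algorithm (which must also handle diffuse light propagation through tissue); it assumes an idealized orthographic camera and rotation about the vertical axis, and all names here are hypothetical.

```python
import numpy as np

def recover_3d(us, vs, thetas):
    """Recover a 3D point from its 2D image coordinates (u, v) observed
    while the object rotates by known angles about the vertical axis.
    Orthographic model: u = x*cos(t) + z*sin(t), v = y.
    Solves the overdetermined system for (x, z) by least squares."""
    A = np.column_stack([np.cos(thetas), np.sin(thetas)])
    (x, z), *_ = np.linalg.lstsq(A, np.asarray(us), rcond=None)
    y = float(np.mean(vs))  # v is invariant under this rotation
    return np.array([x, y, z])

# Synthetic check: a point at (1.0, 0.5, -2.0) viewed at four small rotations
truth = np.array([1.0, 0.5, -2.0])
thetas = np.deg2rad([0, 10, 20, 30])
us = truth[0] * np.cos(thetas) + truth[2] * np.sin(thetas)
vs = np.full_like(thetas, truth[1])
print(recover_3d(us, vs, thetas))  # recovers approximately (1.0, 0.5, -2.0)
```

With more than two views the system is overdetermined, which is why small incremental rotations of a single camera can substitute for a multi-camera rig.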

  4. Determining 3D Flow Fields via Multi-camera Light Field Imaging