Science.gov

Sample records for 3d imaging technologies

  1. A 3-D fluorescence imaging system incorporating structured illumination technology

    NASA Astrophysics Data System (ADS)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, OptigridTM, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the OptigridTM and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the OptigridTM onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the OptigridTM at different spatial frequencies and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the OptigridTM. Post-processing extracted depth information by depth modulation analysis of a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
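The depth modulation analysis mentioned above is typically done by square-law demodulation of phase-shifted grid images (the optical-sectioning approach of Neil et al., which OptigridTM systems implement). The students' exact pipeline is not given in the abstract, so the following is only a minimal sketch of the standard three-phase method, with a synthetic uniformly lit plane as input:

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Recover the optically sectioned (in-focus) component from three
    structured-illumination images taken with grid phases 0, 2*pi/3 and
    4*pi/3 (square-law demodulation); returns the modulation amplitude."""
    i1, i2, i3 = (np.asarray(a, dtype=float) for a in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2) * np.sqrt(2.0) / 3.0

# Synthetic check: an in-focus plane modulated by a shifted grid (depth m = 0.5).
x = np.linspace(0, 2 * np.pi, 256)
base = np.ones_like(x)  # in-focus fluorescence
imgs = [base * (1 + 0.5 * np.cos(4 * x + p)) for p in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
out = sectioned_image(*imgs)  # grid cancels; modulation amplitude 0.5 remains
```

Only in-focus structure is modulated by the grid, so the demodulated amplitude falls off with defocus; this is what provides depth discrimination along the optical axis.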

  2. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L.F. Gillespie of the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector, respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, the technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert to 3-D spatial information, the imaging field of view of the system, that is, the focal length of the system, must be obtained. Then, based on the distance information of the space slice, the spatial position of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal (intrinsic) and external (extrinsic) parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
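The slice-to-3D inversion described here (gate delay gives the slice range, focal length gives the lateral scale per pixel) can be sketched with a simple pinhole model. The parameter values below are illustrative assumptions, not from the paper:

```python
import numpy as np

C = 2.99792458e8  # speed of light, m/s

def slice_range(gate_delay_s):
    """Range of the gated slice from the laser-to-strobe delay (round trip)."""
    return C * gate_delay_s / 2.0

def pixel_to_xyz(u, v, z, f_pix, cx, cy):
    """Back-project pixel (u, v) at slice range z through a pinhole model
    with focal length f_pix (in pixels) and principal point (cx, cy)."""
    x = (u - cx) * z / f_pix
    y = (v - cy) * z / f_pix
    return np.array([x, y, z])

z = slice_range(1.0e-6)  # a 1 us gate delay puts the slice at ~150 m
p = pixel_to_xyz(400, 300, z, f_pix=1200.0, cx=320.0, cy=240.0)
```

This is why the focal length (hence zoom lens calibration) is needed: without `f_pix`, the lateral coordinates of each pixel in the slice cannot be recovered.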

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  4. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore, we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally, we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  5. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive, 3-D imaging. Applications to telesurgery and telemedicine as well as to the needs of the defense and intelligence communities are also discussed.

  6. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.
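The ~1 cm volumetric resolution quoted above follows directly from the chirp bandwidth: for a chirped (FMCW-style) radar the range resolution is dR = c / (2B), so a 30 GHz chirp gives about 5 mm in range. A quick check:

```python
C = 2.99792458e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Theoretical chirp/FMCW range resolution: dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

dr = range_resolution(30e9)  # 30 GHz chirp -> ~5 mm
```

The cross-range cells come separately from the 30 cm lens aperture and standoff distance, which together bring the volumetric cell to roughly 1 cm^3.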

  7. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation that coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  8. Active and Passive 3d Imaging Technologies Applied to Waterlogged Wooden Artifacts from Shipwrecks

    NASA Astrophysics Data System (ADS)

    Bandiera, A.; Alfonso, C.; Auriemma, R.

    2015-04-01

    The fragility of organic artefacts in the presence of water and their volumetric variation caused by the marine life on or surrounding them dictate that their physical dimensions be measured soon after their extraction from the seabed. In an ideal context, it would be appropriate to preserve and restore all the archaeological elements, rapidly and with the latest methods. Unfortunately, however, the large number of artefacts makes the cost of such an operation prohibitive for a public institution. For this reason, digital technologies for documentation, restoration, display and conservation are being considered by many institutions working with limited budgets. In this paper, we illustrate the experience of the University of Salento in 3D imaging technology for waterlogged wooden artefacts from shipwrecks. The interest originates from the need to develop a protocol for documentation and digital restoration of archaeological finds discovered along the coast of Torre S. Sabina (BR), Italy. This work has allowed us to explore recent technologies for 3D acquisition, both underwater and in the laboratory, as well as methods for data processing. These technologies have permitted us to start defining a protocol to follow for all waterlogged wooden artefacts requiring documentation and restoration.

  9. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  10. Development and Calibration of New 3-D Vector VSP Imaging Technology: Vinton Salt Dome, LA

    SciTech Connect

    Kurt J. Marfurt; Hua-Wei Zhou; E. Charlotte Sullivan

    2004-09-01

    Vinton salt dome is located in Southwestern Louisiana, in Calcasieu Parish. Tectonically, the piercement dome is within the salt dome minibasin province. The field has been in production since 1901, with most of the production coming from Miocene and Oligocene sands. The goal of our project was to develop and calibrate new processing and interpretation technology to fully exploit the information available from a simultaneous 3-D surface seismic survey and 3-C, 3-D vertical seismic profile (VSP) survey over the dome. More specifically, the goal was to better image salt dome flanks and small, reservoir-compartmentalizing faults. This new technology has application to mature salt-related fields across the Gulf Coast. The primary focus of our effort was to develop, apply, and assess the limitations of new 3-C, 3-D wavefield separation and imaging technology that could be used to image aliased, limited-aperture, vector VSP data. Through 2-D and 3-D full elastic modeling, we verified that salt flank reflections exist in the horizontally-traveling portion of the wavefield rather than the up- and down-going portions of the wavefield, thereby explaining why many commercial VSP processing flows failed. Since the P-wave reflections from the salt flank are measured primarily on the horizontal components while P-wave reflections from deeper sedimentary horizons are measured primarily on the vertical component, a true vector VSP analysis was needed. We developed an antialiased discrete Radon transform filter to accurately model P- and S-wave data components measured by the vector VSP. On-the-fly polarization filtering embedded in our Kirchhoff imaging algorithm was effective in separating PP from PS wave images. By the novel application of semblance-weighted filters, we were able to suppress many of the migration artifacts associated with low fold, sparse VSP acquisition geometries. To provide a better velocity/depth model, we applied 3-D prestack depth migration to the surface data
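The authors' antialiased discrete Radon transform is not reproduced here, but the underlying idea, slant-stacking traces along trial slownesses so that linear events (such as horizontally traveling salt-flank energy) stack coherently, can be sketched as follows; the geometry, sampling, and event are entirely synthetic:

```python
import numpy as np

def slant_stack(data, dt, offsets, slownesses):
    """Linear (tau-p) Radon transform by nearest-sample slant stacking:
    for each trial slowness p, sum the traces along t = tau + p * x."""
    nt, _ = data.shape
    out = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))
            if 0 <= shift < nt:
                out[: nt - shift, ip] += data[shift:, ix]
    return out

# Synthetic gather: one linear event with slowness 0.002 s/m across 8 traces.
dt, offsets = 0.004, np.arange(8) * 10.0
data = np.zeros((64, 8))
for ix, x in enumerate(offsets):
    data[int(round(0.002 * x / dt)) + 10, ix] = 1.0
tp = slant_stack(data, dt, offsets, [0.0, 0.001, 0.002])  # peak at p = 0.002
```

In the tau-p domain the event collapses to a single point at its true slowness, which is what makes Radon-domain modeling and filtering of aliased, sparse VSP data attractive.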

  11. Utilization of 3D imaging flash lidar technology for autonomous safe landing on planetary bodies

    NASA Astrophysics Data System (ADS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrottet, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.
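As a rough sketch of the processing chain described (per-pixel time of flight to range, then hazard detection on the resulting terrain map), the following illustrates the two core steps. The slope threshold, grid spacing, and terrain are illustrative assumptions, not requirements from the paper:

```python
import numpy as np

C = 2.99792458e8  # speed of light, m/s

def tof_to_range(tof_s):
    """Per-pixel round-trip time of flight -> range in meters."""
    return np.asarray(tof_s) * C / 2.0

def slope_hazard(elev, spacing_m, max_slope_deg=10.0):
    """Flag elevation-map cells whose local slope exceeds a safety limit."""
    gy, gx = np.gradient(elev, spacing_m)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    return slope_deg > max_slope_deg

elev = np.zeros((8, 8))
elev[:, 4:] = 2.0  # a 2 m step standing in for a rock ledge
hazard = slope_hazard(elev, spacing_m=1.0)  # True near the step, False elsewhere
```

A real hazard-detection system would also check roughness and landing-pad fit, but slope thresholding on the lidar-derived elevation map is the basic operation.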

  12. Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrotter, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.

  13. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT]

    NASA Astrophysics Data System (ADS)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high quality 3D visualization at PC price points. Optimizations in display driver, panel timing firmware, backlight hardware, eyewear optical stack, and sync mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could profusely benefit from the following calls to action: 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100 µs latency control (via BT SIG) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for monitors and TVs to monetize via PC attach.

  14. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures target range with a pulse time-of-flight method. Because of the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high range resolution images with a low sampling rate.
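Ghost imaging reconstructs a target from correlations between structured illumination patterns and a single-pixel "bucket" signal. The heterodyne range recovery specific to HGI is not reproduced here; this is only a sketch of the basic spatial-correlation step, G = <B*I> - <B><I>, on a synthetic computational example:

```python
import numpy as np

rng = np.random.default_rng(0)

def ghost_image(patterns, bucket):
    """Correlation reconstruction G = <B*I> - <B><I> over N patterns."""
    patterns = np.asarray(patterns, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    return (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

# Synthetic target: a bright square on a dark background.
target = np.zeros((16, 16))
target[5:11, 5:11] = 1.0
patterns = rng.random((4000, 16, 16))          # random illumination patterns
bucket = (patterns * target).sum(axis=(1, 2))  # single-pixel "bucket" signal
g = ghost_image(patterns, bucket)              # bright where the target is
```

With enough patterns, the correlation converges to the target's transmission profile even though no pixelated detector ever sees the object, which is the core property 3D ghost imaging builds on.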

  15. 3D scintigraphic imaging and navigation in radioguided surgery: freehand SPECT technology and its clinical applications.

    PubMed

    Bluemel, Christina; Matthies, Philipp; Herrmann, Ken; Povoski, Stephen P

    2016-01-01

    Freehand SPECT (fhSPECT) is a technology platform for providing 3-dimensional (3D) navigation for radioguided surgical procedures, such as sentinel lymph node (SLN) biopsy (SLNB). In addition to the information provided by conventional handheld gamma detection probes, fhSPECT allows for direct visualization of the distribution of radioactivity in any given region of interest, allowing for improved navigation to radioactive target lesions and providing accurate lesion depth measurements. Herein, we will review the currently available clinical data on the use of fhSPECT: (i) for SLNB of various malignancies, including difficult-to-detect SLNs, and (ii) for radioguided localization of solid tumors. Moreover, the combination of fhSPECT with other technologies (e.g., small field-of-view gamma cameras, and diagnostic ultrasound) is discussed. These technical advances have the potential to greatly expand the clinical application of radioguided surgery in the future. PMID:26878667

  16. The 3-D image recognition based on fuzzy neural network technology

    NASA Technical Reports Server (NTRS)

    Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei

    1993-01-01

    A three-dimensional stereoscopic image recognition system based on fuzzy neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Images from two CCD color cameras are fed to the preprocessing part, where several operations, including an RGB-HSV transformation, are performed. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation with a special image input hardware system. An experimental result on bottle images is also presented.
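The RGB-HSV transformation used in the preprocessing part is a standard color space conversion. For example, with Python's standard library (illustrative only, not the original 1993 implementation):

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert (r, g, b) pixels in [0, 1] to (h, s, v) tuples, as in
    the RGB-HSV preprocessing step."""
    return [colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in pixels]

hsv = rgb_image_to_hsv([(1.0, 0.0, 0.0),   # pure red   -> h = 0
                        (0.0, 1.0, 0.0),   # pure green -> h = 1/3
                        (0.5, 0.5, 0.5)])  # gray       -> s = 0
```

Separating hue from intensity in this way makes the subsequent line detection less sensitive to illumination differences between the two camera views.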

  17. Recent developments in stereoscopic and holographic 3D display technologies

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri

    2014-06-01

    Currently, there is increasing interest in the development of high performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we will review the attributes of the various 3D display technologies including stereoscopic and holographic 3D, human factors issues of stereoscopic 3D, the challenges in realizing Holographic 3D displays and the recent progress in these technologies.

  18. Evolving technologies for growing, imaging and analyzing 3D root system architecture of crop plants.

    PubMed

    Piñeros, Miguel A; Larson, Brandon G; Shaff, Jon E; Schneider, David J; Falcão, Alexandre Xavier; Yuan, Lixing; Clark, Randy T; Craft, Eric J; Davis, Tyler W; Pradier, Pierre-Luc; Shaw, Nathanael M; Assaranurak, Ithipong; McCouch, Susan R; Sturrock, Craig; Bennett, Malcolm; Kochian, Leon V

    2016-03-01

    A plant's ability to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, can be strongly influenced by root system architecture (RSA), the three-dimensional distribution of the different root types in the soil. The ability to image, track and quantify these root system attributes in a dynamic fashion is a useful tool in assessing desirable genetic and physiological root traits. Recent advances in imaging technology and phenotyping software have resulted in substantive progress in describing and quantifying RSA. We have designed a hydroponic growth system which retains the three-dimensional RSA of the plant root system, while allowing for aeration, solution replenishment and the imposition of nutrient treatments, as well as high-quality imaging of the root system. The simplicity and flexibility of the system allows for modifications tailored to the RSA of different crop species and improved throughput. This paper details the recent improvements and innovations in our root growth and imaging system which allows for greater image sensitivity (detection of fine roots and other root details), higher efficiency, and a broad array of growing conditions for plants that more closely mimic those found under field conditions. PMID:26683583

  19. GammaModeler TM 3-D gamma-ray imaging technology

    SciTech Connect

    2000-09-01

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide a 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.
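Once the extended sources are deconvolved into localized point sources with known 30 cm dose rates, a radiation map follows from inverse-square scaling (ignoring attenuation and buildup). A minimal sketch of that last step, with invented numbers; the deconvolution itself is not shown:

```python
def dose_rate_at(d30_rate, r_m):
    """Scale a point-source dose rate referenced at 30 cm to distance r
    by the inverse-square law (no attenuation or buildup assumed)."""
    return d30_rate * (0.30 / r_m) ** 2

def dose_at_point(sources, p):
    """Sum inverse-square contributions of point sources, each given as
    (x, y, z, dose rate at 30 cm), at map point p = (x, y, z)."""
    total = 0.0
    for (x, y, z, d30) in sources:
        r2 = (p[0] - x) ** 2 + (p[1] - y) ** 2 + (p[2] - z) ** 2
        total += d30 * (0.30 ** 2) / r2
    return total

d = dose_rate_at(100.0, 1.0)  # 100 (arbitrary units) at 30 cm -> 9 at 1 m
```

Summing such contributions over a grid of map points is what turns the localized sources into the canyon-wide radiation map described above.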

  20. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  1. Three-dimensional image technology in forensic anthropology: Assessing the validity of biological profiles derived from CT-3D images of the skeleton

    NASA Astrophysics Data System (ADS)

    Garcia de Leon Valenzuela, Maria Julia

    This project explores the reliability of building a biological profile for an unknown individual based on three-dimensional (3D) images of the individual's skeleton. 3D imaging technology has been widely researched for medical and engineering applications, and it is increasingly being used as a tool for anthropological inquiry. While the question of whether a biological profile can be derived from 3D images of a skeleton with the same accuracy as achieved when using dry bones has been explored, bigger sample sizes, a standardized scanning protocol and more interobserver error data are needed before 3D methods can become widely and confidently used in forensic anthropology. 3D images of Computed Tomography (CT) scans were obtained from 130 innominate bones from Boston University's skeletal collection (School of Medicine). For each bone, both 3D images and original bones were assessed using the Phenice and Suchey-Brooks methods. Statistical analysis was used to determine the agreement between 3D image assessment versus traditional assessment. A pool of six individuals with varying experience in the field of forensic anthropology scored a subsample (n = 20) to explore interobserver error. While a high agreement was found for age and sex estimation for specimens scored by the author, the interobserver study shows that observers found it difficult to apply standard methods to 3D images. Higher levels of experience did not result in higher agreement between observers, as would be expected. Thus, a need for training in 3D visualization before applying anthropological methods to 3D bones is suggested. Future research should explore interobserver error using a larger sample size in order to test the hypothesis that training in 3D visualization will result in a higher agreement between scores. The need for the development of a standard scanning protocol focusing on the optimization of 3D image resolution is highlighted. 
Applications for this research include the possibility
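Interobserver agreement in studies like this is commonly quantified with a chance-corrected statistic such as Cohen's kappa. The abstract does not state which statistic was used, so the following is a generic illustration with invented Suchey-Brooks phase scores:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical scores:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical observers scoring phases I-VI on 10 specimens:
a = [1, 2, 2, 3, 4, 4, 5, 5, 6, 6]
b = [1, 2, 3, 3, 4, 5, 5, 5, 6, 5]
kappa = cohens_kappa(a, b)  # ~0.64: substantial but imperfect agreement
```

Because kappa discounts chance agreement, it is better suited than raw percent agreement for comparing scores across observers with different experience levels.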

  2. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2012-08-29

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  3. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-infrared Structured Light Scanning (NIRSL), for three-dimensional human model scanning. Human model scanning has long been a challenging task owing to the variety of hair and clothing appearances and to body motion. Previous structured light scanning methods typically emitted visible coded light patterns onto static and opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects whose surfaces reflect visible light well, such as plaster or light-colored cloth. For human model scanning, however, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning. Because of body motion, this assumption is difficult to maintain when scanning a human model. The proposed sensing system, by utilizing the new near-infrared-capable high speed LightCrafter DLP projector, is robust to motion and provides an accurate, high resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system is effective and efficient at scanning real human models with various dark hair, jeans and shoes, is robust to human body motion and produces an accurate, high resolution 3D point cloud.
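Structured light scanners of this kind commonly establish projector-camera correspondence by decoding the phase of shifted fringe patterns. The paper's exact coding scheme is not specified, so this sketches only the standard three-step phase-shifting decode for a single pixel:

```python
import math

def wrapped_phase(i0, i1, i2):
    """Wrapped phase at a pixel from three fringe images with pattern
    shifts of 0, 120 and 240 degrees (three-step algorithm):
    tan(phi) = sqrt(3) * (I1 - I2) / (2*I0 - I1 - I2)."""
    return math.atan2(math.sqrt(3.0) * (i1 - i2), 2.0 * i0 - i1 - i2)

# Pixel intensities I_k = A + B*cos(phi - k*2*pi/3) for a true phi = 0.7:
A, B, phi = 1.0, 0.5, 0.7
imgs = [A + B * math.cos(phi - k * 2 * math.pi / 3) for k in range(3)]
est = wrapped_phase(*imgs)  # recovers phi
```

The recovered phase indexes the projector column for each camera pixel; triangulating that correspondence yields the 3D point cloud.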

  4. Helicopter Flight Test of 3-D Imaging Flash LIDAR Technology for Safe, Autonomous, and Precise Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-01-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 micrometer Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GN&C system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEMs).
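Reconstructing Digital Elevation Maps from the calibrated 3-D images amounts to rasterizing the georeferenced point returns onto a grid. A minimal sketch with toy data (gridding by highest return per cell; not the ALHAT DEM pipeline, whose details are not given here):

```python
def build_dem(points, cell_m, nx, ny):
    """Rasterize (x, y, z) lidar returns into a DEM grid: keep the
    highest z per cell; None marks cells that received no return."""
    dem = [[None] * nx for _ in range(ny)]
    for x, y, z in points:
        j, i = int(x // cell_m), int(y // cell_m)
        if 0 <= i < ny and 0 <= j < nx:
            if dem[i][j] is None or z > dem[i][j]:
                dem[i][j] = z
    return dem

pts = [(0.2, 0.3, 1.0), (0.8, 0.1, 2.5), (1.5, 0.5, 0.7)]
dem = build_dem(pts, cell_m=1.0, nx=2, ny=1)  # [[2.5, 0.7]]
```

Keeping the highest return per cell preserves protruding hazards (rocks, ledges), which is the conservative choice for landing-site safety maps.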

  5. Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing

    NASA Astrophysics Data System (ADS)

    Roback, Vincent; Bulyshev, Alexander; Amzajerdian, Farzin; Reisse, Robert

    2013-05-01

    Two flash lidars, integrated from a number of cutting-edge components from industry and NASA, are lab characterized and flight tested for determination of maximum operational range under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project (in its fourth development and field test cycle) which is seeking to develop a guidance, navigation, and control (GNC) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The flash lidars incorporate pioneering 3-D imaging cameras based on Indium-Gallium-Arsenide Avalanche Photo Diode (InGaAs APD) and novel micro-electronic technology for a 128 x 128 pixel array operating at 30 Hz, high pulse-energy 1.06 μm Nd:YAG lasers, and high performance transmitter and receiver fixed and zoom optics. The two flash lidars are characterized on the NASA-Langley Research Center (LaRC) Sensor Test Range, integrated with other portions of the ALHAT GNC system from partner organizations into an instrument pod at NASA-JPL, integrated onto an Erickson Aircrane Helicopter at NASA-Dryden, and flight tested at the Edwards AFB Rogers dry lakebed over a field of human-made geometric hazards during the summer of 2010. Results show that the maximum operational range goal of 1 km is met and exceeded up to a value of 1.2 km. In addition, calibrated 3-D images of several hazards are acquired in real-time for later reconstruction into Digital Elevation Maps (DEMs).

  6. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R & D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to safety problems in the atomic and railway industries, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method. Efficient algorithms were proposed for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method, as well as peculiarities of the formation of shadows and images of typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the VVER-1000 and VVER-440 nuclear reactors, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The devices are in pilot operation at atomic and railway companies.

  7. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.

  8. High definition 3D ultrasound imaging.

    PubMed

    Morimoto, A K; Krumm, J C; Kozlowski, D M; Kuhlmann, J L; Wilson, C; Little, C; Dickey, F M; Kwok, K S; Rogers, B; Walsh, N

    1997-01-01

    We have demonstrated high definition and improved resolution using a novel scanning system integrated with a commercial ultrasound machine. The result is a volumetric 3D ultrasound data set that can be visualized using standard techniques. Unlike other 3D ultrasound approaches, image quality is improved over standard 2D data. Image definition and bandwidth are improved using patent-pending techniques. The system can be used to image patients or wounded soldiers for general imaging of anatomy such as abdominal organs, extremities, and the neck. Although the risks associated with x-ray carcinogenesis are relatively low at diagnostic dose levels, concerns remain for individuals in high-risk categories. In addition, the cost and portability of CT and MRI machines can be prohibitive. In comparison, ultrasound can provide portable, low-cost, non-ionizing imaging. Previous clinical trials comparing ultrasound to CT were used to demonstrate qualitative and quantitative improvements of ultrasound using the Sandia technologies. Transverse leg images demonstrated much higher clarity and lower noise than is seen in traditional ultrasound images. An x-ray CT scan of the same cross-section was provided for comparison. The results of our most recent trials demonstrate the advantages of 3D ultrasound and motion compensation over 2D ultrasound. Metal objects can also be observed within the anatomy. PMID:10168958

  9. Automatic needle segmentation in 3D ultrasound images using 3D Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has now been developed to avoid accidents, or death of the patient, caused by inaccurate localization of the electrode and the tumor position during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, to provide needle segmentation in 3D US images for 3D US imaging guidance. Based on the (Φ, θ, ρ, α) representation of straight lines in 3D space, we used the 3DHT algorithm to segment needles successfully, assuming that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information about the 3D US images. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms. The experiments demonstrated the feasibility of the two 3D needle segmentation algorithms described in this paper.
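
    The randomized variant described above repeatedly samples candidate lines and keeps the best-supported one. As an illustrative sketch only (a generic RANSAC-style randomized line detector in the same spirit, not the paper's (Φ, θ, ρ, α) Hough accumulator), a needle axis can be found in a 3D point cloud of candidate voxels like this:

```python
import numpy as np

def detect_line_ransac(points, n_iter=200, tol=0.5, rng=None):
    """Randomly sample point pairs; keep the candidate line with most inliers.

    points: (N, 3) array of candidate voxel coordinates.
    Returns (point_on_line, unit_direction).
    """
    rng = np.random.default_rng(rng)
    best = (0, None, None)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # perpendicular distance of every point to the candidate line
        r = points - p
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)
        inliers = int((dist < tol).sum())
        if inliers > best[0]:
            best = (inliers, p, d)
    return best[1], best[2]
```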

  10. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  11. Geomatics for precise 3D breast imaging.

    PubMed

    Alto, Hilary

    2005-02-01

    Canadian women have a one in nine chance of developing breast cancer during their lifetime. Mammography is the most common imaging technology used for breast cancer detection in its earliest stages through screening programs. Clusters of microcalcifications are primary indicators of breast cancer; the shape, size and number may be used to determine whether they are malignant or benign. However, overlapping images of calcifications on a mammogram hinder the classification of the shape and size of each calcification and a misdiagnosis may occur resulting in either an unnecessary biopsy being performed or a necessary biopsy not being performed. The introduction of 3D imaging techniques such as standard photogrammetry may increase the confidence of the radiologist when making his/her diagnosis. In this paper, traditional analytical photogrammetric techniques for the 3D mathematical reconstruction of microcalcifications are presented. The techniques are applied to a specially designed and constructed x-ray transparent Plexiglas phantom (control object). The phantom was embedded with 1.0 mm x-ray opaque lead pellets configured to represent overlapping microcalcifications. Control points on the phantom were determined by standard survey methods and hand measurements. X-ray films were obtained using a LORAD M-III mammography machine. The photogrammetric techniques of relative and absolute orientation were applied to the 2D mammographic films to analytically generate a 3D depth map with an overall accuracy of 0.6 mm. A Bundle Adjustment and the Direct Linear Transform were used to confirm the results. PMID:15649085
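
    The abstract uses the Direct Linear Transform to confirm its depth results. As a minimal sketch of the standard two-view DLT triangulation (not necessarily the paper's exact formulation), a 3D point is recovered from its projections in two x-ray views with known 3x4 projection matrices:

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Two-view Direct Linear Transform: solve A X = 0 for the 3-D point.

    P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) image coordinates.
    """
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])  # each view contributes two linear rows
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                        # null vector = homogeneous 3-D point
    return X[:3] / X[3]
```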

  12. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As main value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology to 3D viewing to audiences of more than one person. Due to the advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed along the past few years by many groups. Since Integral Imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia, has been the realization of a thorough study of the principles that govern its operation. Is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have been addressed to overcome some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, or the limited range of viewing angles of InI monitors.

  13. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three dimensional (3-D) landscapes is an inevitable issue in deep study of biological ecologies, because in whatever scales in nature, all of the ecosystems are composed by complex 3-D environments and biological behaviors. Just imagine if a 3-D technology could help complex ecosystems be built easily and mimic in vivo microenvironment realistically with flexible environmental controls, it will be a fantastic and powerful thrust to assist researchers for explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for biophysics studies in in vitro. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon based Tepuis, constructing 3-D microenvironment for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as explorations of optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home designed 3-D bio-printer. Although 3-D technologies is currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope through my talk, the audiences will be able to sense its significance and predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  14. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32  ×  32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
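
    Plane-wave imaging of this kind reconstructs each voxel by delay-and-sum beamforming: for a given pixel, every receive element's signal is sampled at the round-trip time of flight and summed. A minimal sketch of the delay computation for a 0-degree plane wave (the full 3D beamformer is an assumption-free extension of this per-element formula):

```python
import numpy as np

def planewave_delays(x_elem, x_pix, z_pix, c=1540.0):
    """Round-trip time of flight for a 0-degree plane wave.

    Transmit leg: z/c (plane wavefront reaches depth z).
    Receive leg: distance from the pixel back to each element, over c.
    x_elem may be an array of element positions (meters); c in m/s.
    """
    return (z_pix + np.sqrt((x_pix - x_elem) ** 2 + z_pix ** 2)) / c
```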

  15. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, and to promote 3D photography not only among scientists but also among amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the samples shown are masterpieces of historic as well as of current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, the paper deals with new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as the result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast, and color, recall the stage of the invention of photography.

  16. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  17. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer-generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed through the space; these form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interaction with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  18. Automatic needle segmentation in 3D ultrasound images using 3D improved Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgen

    2008-03-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is widely used to destroy tumor cells or stop bleeding. To avoid accidents, or death of the patient, caused by inaccurate localization of the electrode and the tumor position during treatment, a 3D US guidance system was developed. In this paper, a new automated technique, the 3D Improved Hough Transform (3DIHT) algorithm, which is potentially fast, accurate, and robust, is presented to provide needle segmentation in 3D US images for 3D US imaging guidance. Based on a coarse-fine search strategy and a four-parameter representation of lines in 3D space, the 3DIHT algorithm can segment needles quickly, accurately, and robustly. The technique was evaluated using 3D US images acquired by scanning a water phantom. The position deviation of the segmented line was less than 2 mm and the angular deviation was much less than 2°. The average computational time, measured on a Pentium IV 2.80 GHz PC with a 381×381×250 image, was less than 2 s.
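
    The angular-deviation figure quoted above is a standard evaluation metric for line segmentation. A small sketch of how it can be computed, together with a total-least-squares line fit (the SVD fit is a common refinement step, not the 3DIHT algorithm itself):

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares 3D line fit: centroid plus dominant singular vector."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

def angular_deviation_deg(d1, d2):
    """Acute angle (degrees) between two 3D direction vectors."""
    cosang = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```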

  19. Evaluation of 3D imaging.

    PubMed

    Vannier, M W

    2000-10-01

    Interactive computer-based simulation is gaining acceptance for craniofacial surgical planning. Subjective visualization without objective measurement capability, however, severely limits the value of simulation, since spatial accuracy must be maintained. This study investigated the error sources involved in one method of surgical simulation evaluation. Linear and angular measurement errors were found to be within +/- 1 mm and 1 degree. Surface matching of scanned objects was slightly less accurate, with errors up to 3 voxels and 4 degrees, and Boolean subtraction methods were 93 to 99% accurate. Once validated, these testing methods were applied to objectively compare craniofacial surgical simulations to post-operative outcomes, and they verified that the form of simulation used in this study yields accurate depictions of surgical outcome. However, to fully evaluate surgical simulation, future work is still required to test the new methods in sufficient numbers of patients to achieve statistically significant results. Once completely validated, simulation can not only be used in pre-operative surgical planning, but also as a post-operative descriptor of surgical and traumatic physical changes. Validated image comparison methods can also show discrepancies between surgical outcome and surgical plan, thus allowing evaluation of surgical technique. PMID:11098409
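
    Accuracy figures like the 93-99% quoted for Boolean subtraction can be expressed as voxel-wise agreement between two binary volumes. A minimal sketch of such a metric (the study's exact definition is not given here, so this is only one plausible formulation):

```python
import numpy as np

def overlap_accuracy(a, b):
    """Fraction of voxels where two binary volumes agree (1 - normalized XOR)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 1.0 - np.logical_xor(a, b).mean()
```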

  20. Continuous section extraction and over-underbreak detection of tunnel based on 3D laser technology and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin

    2015-03-01

    To achieve over-/underbreak detection of roadways and to address the difficulty of roadway data collection, this paper presents a new method for continuous section extraction and over-/underbreak detection based on 3D laser scanning technology and image processing. The method consists of three steps: Canny edge detection with local axis fitting, continuous section extraction, and over-/underbreak detection of the section. First, after Canny edge detection, a least-squares curve-fitting method is applied to fit the axis locally. Then the attitude of the local roadway is adjusted so that its axis is consistent with the extraction reference direction, and sections are extracted along that reference direction. Finally, the actual cross-section is compared with the design cross-section to complete over-/underbreak detection. Experimental results show that, compared with traditional detection methods, the proposed method has a clear advantage in computational cost and guarantees that cross-sections are intercepted orthogonally.
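
    The local axis fitting step can be sketched with ordinary least-squares polynomials: fit the cross-section center coordinates x and y as functions of the along-tunnel coordinate z, then evaluate the fitted axis wherever a section is to be extracted. This is an illustrative sketch of the least-squares idea only, not the paper's exact procedure; `fit_local_axis` and `axis_point` are hypothetical helpers.

```python
import numpy as np

def fit_local_axis(centers, deg=2):
    """Fit x(z) and y(z) of cross-section center points with least-squares polynomials.

    centers: (N, 3) array of (x, y, z) center points from the scan.
    """
    z = centers[:, 2]
    px = np.polyfit(z, centers[:, 0], deg)
    py = np.polyfit(z, centers[:, 1], deg)
    return px, py

def axis_point(px, py, z):
    """Evaluate the fitted axis at along-tunnel coordinate z."""
    return np.array([np.polyval(px, z), np.polyval(py, z), z])
```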

  1. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics, and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
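
    The stereo vision method mentioned above recovers depth from the disparity between two rectified camera views. The core relation is Z = f·B/d, sketched here (variable names are illustrative):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in meters;
    disparity_px: horizontal shift of the feature between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a feature with 35 px disparity seen by cameras with a 700 px focal length and 10 cm baseline lies 2 m away.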

  2. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system measuring 35×35×105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen is captured in a single shot for ease of use. With the light-field raw data and the accompanying program, the focal plane can be changed digitally and the 3-D image reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data-analysis algorithm is needed to precisely determine depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light-field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass over a 600 μm range, and we show its focal stacks and 3-D positions.
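
    Digital refocusing from light-field data is, at its simplest, shift-and-add: each sub-aperture view is shifted in proportion to its position in the aperture and the views are averaged, bringing one depth plane into focus. A 1-D sketch of this idea (illustrative only; a real light-field microscope pipeline also deconvolves and works in 2-D):

```python
import numpy as np

def refocus(views, slopes, alpha):
    """Shift each sub-aperture view by alpha*slope and average (synthetic refocus).

    views: (V, N) stack of 1-D sub-aperture images;
    slopes: per-view aperture coordinate; alpha selects the focal plane.
    """
    out = np.zeros(views.shape[1])
    for v, s in zip(views, slopes):
        out += np.roll(v, int(round(alpha * s)))
    return out / len(views)
```

With the correct alpha, features at the chosen depth realign across views and sum coherently; elsewhere they blur out.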

  3. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation enables high-dynamic-range 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for high-quality 3D imaging of both highly and lowly reflective surfaces. PMID:27607639
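
    The per-ray calibration described above determines independent mapping coefficients for each ray. Assuming the mapping is locally linear in phase (a simplifying assumption for illustration; the paper derives its own mapping), each ray's coefficients can be fitted from reference planes placed at known depths:

```python
import numpy as np

def calibrate_ray(phases, depths):
    """Per-ray linear phase-to-depth mapping, depth ~ a*phase + b, by least squares.

    phases: measured fringe phase of this ray at each calibration plane;
    depths: the known depths of those planes.
    """
    a, b = np.polyfit(phases, depths, 1)
    return a, b
```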

  4. 3D EIT image reconstruction with GREIT.

    PubMed

    Grychtol, Bartłomiej; Müller, Beat; Adler, Andy

    2016-06-01

    Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
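
    GREIT itself trains a linear reconstruction matrix against desired point-target images; as a much simpler stand-in for illustration only, a one-step Tikhonov-regularized linearized reconstruction maps measured voltage changes to a conductivity-change image through a matrix of the same shape:

```python
import numpy as np

def linear_reconstruction_matrix(J, lam=0.01):
    """One-step Tikhonov reconstruction matrix R = (J'J + lam^2 I)^(-1) J'.

    J: sensitivity (Jacobian) matrix, measurements x image elements.
    This is NOT the GREIT training procedure, only the plain regularized
    analogue of a linear EIT reconstruction x = R v.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam**2 * np.eye(n), J.T)
```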

  5. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  6. Personalized development of human organs using 3D printing technology.

    PubMed

    Radenkovic, Dina; Solouk, Atefeh; Seifalian, Alexander

    2016-02-01

    3D printing is a technique for fabricating physical models from a 3D volumetric digital image. The image is sliced and printed in thin layers of a specific material, and successive layering of the material produces a 3D model. It has already been used to print surgical models for preoperative planning and to construct personalized prostheses for patients. The ultimate goal is the development of functional human organs and tissues, to overcome the limitations on organ transplantation created by the lack of organ donors and by life-long immunosuppression. We hypothesized a precision-medicine approach to human organ fabrication using 3D printing technology, in which the digital volumetric data would be collected by imaging the patient, i.e. CT or MRI images, followed by mathematical modeling to create a digital 3D image. Then a suitable biocompatible material, with an optimal resolution for cell seeding and maintenance of cell viability during the printing process, would be printed with a compatible printer type and finally implanted into the patient. Life-saving operations with 3D printed implants have already been performed in patients. However, several issues need to be addressed before the translational application of 3D printing to clinical medicine: vascularization, innervation, the financial cost of 3D printing, and the safety of the biomaterials used for the construct. PMID:26826637
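
    The slicing step mentioned above reduces to intersecting each mesh triangle with a horizontal plane; the resulting segments form the contour printed at that layer. A minimal sketch of the per-triangle geometry (`slice_triangle` is a hypothetical helper; real slicers also handle vertices lying exactly on the plane and chain segments into closed loops):

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one mesh triangle with the plane Z = z.

    tri: three (x, y, z) vertices. Returns the two (x, y) endpoints of the
    intersection segment, or None if the triangle does not cross the plane.
    """
    tri = np.asarray(tri, dtype=float)
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        za, zb = tri[a][2], tri[b][2]
        if (za - z) * (zb - z) < 0:          # this edge crosses the plane
            t = (z - za) / (zb - za)         # linear interpolation parameter
            p = tri[a] + t * (tri[b] - tri[a])
            pts.append(p[:2])
    return pts if len(pts) == 2 else None
```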

  7. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
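The abstract's core operation, a separable 3D wavelet transform with exact reconstruction, can be sketched in a few lines. The one-level Haar transform below is illustrative only: ICER-3D uses a different filter and a more elaborate decomposition structure, and the function names here are invented for this sketch.

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the Haar transform along one axis (even length assumed)."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0   # approximation (low-pass) coefficients
    hi = (a[0::2] - a[1::2]) / 2.0   # detail (high-pass) coefficients
    return np.moveaxis(np.concatenate([lo, hi], axis=0), 0, axis)

def haar_3d(cube):
    """Single-level separable 3D Haar transform over (band, row, col)."""
    for axis in range(3):
        cube = haar_1d(cube, axis)
    return cube

def inv_haar_1d(a, axis):
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0] // 2
    lo, hi = a[:n], a[n:]
    out = np.empty_like(a)
    out[0::2] = lo + hi              # inverts the averaging/differencing pair
    out[1::2] = lo - hi
    return np.moveaxis(out, 0, axis)

def inv_haar_3d(cube):
    for axis in reversed(range(3)):
        cube = inv_haar_1d(cube, axis)
    return cube

rng = np.random.default_rng(0)
x = rng.random((4, 8, 8))            # tiny hyperspectral-style cube
assert np.allclose(inv_haar_3d(haar_3d(x)), x)   # perfect reconstruction
```

The separable structure is what lets a 3D transform exploit correlation along the spectral axis as well as the two spatial axes.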

  8. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, with applications ranging up to medicine and entertainment, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method offers high resolution. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, applications include plastic surgery and the replacement of X-rays, especially in pediatric use.

  9. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with the matched features, we compute disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices, which are computed from the Fundamental matrix calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion; this is required because the camera motion obtained from an Essential matrix is determined only up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between the lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real time.
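The epipolar constraint that underlies the Essential-matrix step can be checked numerically. The sketch below builds E = [t]×R from an assumed relative pose (all numbers are hypothetical, not from the paper) and verifies that corresponding normalized image points satisfy p2ᵀ E p1 = 0.

```python
import numpy as np

def skew(v):
    """Cross-product matrix so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Hypothetical relative pose between two lenses of a stereo rig
angle = np.deg2rad(5.0)
R = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])  # small yaw
t = np.array([0.12, 0.0, 0.0])                      # baseline along x, metres

E = skew(t) @ R                                     # Essential matrix

# Random 3D points in front of camera 1, projected into both views
rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))
p1 = X / X[:, 2:3]                 # normalized image coords, camera 1
X2 = X @ R.T + t                   # same points in camera-2 frame
p2 = X2 / X2[:, 2:3]               # normalized image coords, camera 2

# Epipolar constraint p2^T E p1 = 0 holds for every correspondence
residual = np.einsum('ni,ij,nj->n', p2, E, p1)
assert np.max(np.abs(residual)) < 1e-12
```

In practice E is estimated from noisy matches (e.g. the 8-point algorithm inside RANSAC), and the residual above is exactly the quantity such estimators drive toward zero.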

  10. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  11. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  12. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  13. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges [1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization, and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process, and the resulting image.

  14. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify a device's capability of capturing a real scene. For this reason, several national and international organizations have been developing protocols over the last ten years for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also works on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy, and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.
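A typical step in such protocols, fitting a certified reference shape to the measured points and inspecting the deviations, can be sketched as a linear least-squares sphere fit. All numbers below (sphere size, noise level, point count) are hypothetical.

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit via |p|^2 = 2 p.c + (r^2 - |c|^2)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Simulated scan of a certified 25 mm-radius sphere with 0.05 mm range noise
rng = np.random.default_rng(4)
n = 2000
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)       # random directions
true_c, true_r = np.array([10.0, -5.0, 80.0]), 25.0  # units: mm
pts = true_c + true_r * u + rng.normal(scale=0.05, size=(n, 3))

c, r = fit_sphere(pts)
# Deviations of the fit from the certified geometry stay well below the noise
assert np.linalg.norm(c - true_c) < 0.02
assert abs(r - true_r) < 0.02
```

Comparing fitted centers of two rigidly connected spheres against their certified separation is exactly the kind of derived parameter the protocols mention.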

  15. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration, which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.
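A minimal example of updating a rendering as each new 2D slice arrives is a running maximum intensity projection (MIP). The actual system renders full 3D volumes, so this sketch only illustrates the incremental-update idea, with invented names and sizes.

```python
import numpy as np

def update_mip(mip, new_slice):
    """Incrementally fold a newly acquired slice into a running MIP."""
    return np.maximum(mip, new_slice)

rng = np.random.default_rng(2)
volume = rng.random((16, 64, 64))   # 16 slices arriving one at a time

mip = np.zeros((64, 64))
for s in volume:                    # rendering refreshes per slice, not per volume
    mip = update_mip(mip, s)

# After all slices, the incremental result equals the full-volume projection
assert np.allclose(mip, volume.max(axis=0))
```

The point of the incremental form is that the display can refresh after every slice instead of waiting for a complete volume, which is what makes real-time presentation in the magnet room feasible.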

  16. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  17. 3D Technology for intelligent trackers

    SciTech Connect

    Lipton, Ronald; /Fermilab

    2010-09-01

    At Super-LHC luminosity it is expected that the standard suite of level 1 triggers for CMS will saturate. Information from the tracker will be needed to reduce trigger rates to satisfy the level 1 bandwidth. Tracking trigger modules which correlate information from closely-spaced sensor layers to form an on-detector momentum filter are being developed by several groups. We report on a trigger module design which utilizes three dimensional integrated circuit technology incorporating chips which are connected both to the top and bottom sensor, providing the ability to filter information locally. A demonstration chip, the VICTR, has been submitted to the Chartered/Tezzaron two-tier 3D run coordinated by Fermilab. We report on the 3D design concept, the status of the VICTR chip and associated sensor integration utilizing oxide bonding.

  18. Image based 3D city modeling: Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and computer-vision-based modeling. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages offer different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this kind is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers some personal comments on what can and cannot be done with each package. In conclusion, each package has advantages and limitations, and the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  19. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added vertex positions of the triangle mesh. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any changes in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation, and scaling. The performance of the method is compared with other existing 3D steganography algorithms.
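The paper's re-triangulation scheme is more sophisticated, but the basic idea of hiding bits in vertex data with bounded geometric distortion can be illustrated with a toy scheme that stores one bit in the least-significant quantized digit of a coordinate. This is not the authors' algorithm; `STEP` and the helper names are invented here.

```python
import numpy as np

STEP = 1e-4   # quantization step; displacements stay visually negligible

def embed_bits(vertices, bits):
    """Hide one bit in the least-significant quantized digit of each x coord."""
    v = vertices.copy()
    q = np.round(v[:len(bits), 0] / STEP).astype(np.int64)
    q = (q & ~1) | np.asarray(bits)          # force LSB to the secret bit
    v[:len(bits), 0] = q * STEP
    return v

def extract_bits(vertices, n):
    """Recover the hidden bits from the first n vertices."""
    q = np.round(vertices[:n, 0] / STEP).astype(np.int64)
    return (q & 1).tolist()

rng = np.random.default_rng(3)
mesh = rng.uniform(-1, 1, size=(50, 3))      # toy cover mesh vertices
secret = [1, 0, 1, 1, 0, 0, 1, 0]

stego = embed_bits(mesh, secret)
assert extract_bits(stego, len(secret)) == secret
assert np.max(np.abs(stego - mesh)) < 2 * STEP   # distortion stays bounded
```

Unlike this toy, schemes that embed in mesh connectivity or in coordinates expressed in an affine-invariant frame can survive the cropping, rotation, and scaling attacks the abstract mentions.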

  20. Teat Morphology Characterization With 3D Imaging.

    PubMed

    Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

    2015-07-01

    The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of the detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples from 65 dairy cows over 12 months of age were obtained from cows slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland, and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat consisting of the papillary duct, Fürstenberg's rosette, and the distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology, and a number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately and with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may impact the teat canal and the external surface of the teat. PMID:25382725

  1. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve image or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  2. Active and interactive floating image display using holographic 3D images

    NASA Astrophysics Data System (ADS)

    Morii, Tsutomu; Sakamoto, Kunio

    2006-08-01

    We developed a prototype tabletop holographic display system consisting of an object recognition system and a spatial imaging system. In this paper, we describe the recognition system, which uses an RFID tag, and the 3D display system, which uses holographic technology. A 3D display is a useful technology for virtual reality, mixed reality, and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images [1,2,3]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The authors describe the interactive tabletop 3D display system, in which the observer can view virtual images when the user puts a special object on the display table. The key technologies of this system are the object recognition system and the spatial imaging display.

  3. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, the saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and the repair cost. Also, if the internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within the logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  4. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  5. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  6. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360-degree-view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-encoded structured lighting ensures precise reconstruction of the depth of the object. A 3D imaging system architecture is presented that employs the displacement between the camera and the projector to triangulate the depth information. The 3D camera system has achieved a depth resolution down to 0.1 mm on a human-head-sized object and 360-degree imaging capability.
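The camera-projector displacement triangulates depth in the standard way, Z = f·B/d for a rectified pinhole geometry. A minimal sketch, with purely hypothetical focal length and baseline (the paper does not give its parameters):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Classic triangulation: Z = f * B / d (rectified pinhole geometry)."""
    return f_px * baseline_m / disparity_px

# Hypothetical numbers: 2400 px focal length, 150 mm camera-projector baseline
f, B = 2400.0, 0.150
z = depth_from_disparity(f, B, disparity_px=600.0)
assert abs(z - 0.6) < 1e-9     # a 600 px disparity puts the point at 0.6 m

# Depth sensitivity of a one-pixel disparity error at that range (~1 mm here):
dz = abs(depth_from_disparity(f, B, 601.0) - z)
```

Because dz shrinks with a longer baseline and finer sub-pixel disparity estimation, sub-millimetre depth resolution like the 0.1 mm figure in the abstract depends on localizing the projected color code to a small fraction of a pixel.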

  7. Quality of 3D Models Generated by SFM Technology

    NASA Astrophysics Data System (ADS)

    Marčiš, Marián

    2013-12-01

    Using various types of automation in digital photogrammetry raises questions such as the accuracy of a 3D model generated on various types of surfaces and textures, the financial cost of the equipment needed, and the time cost of the processing. This paper deals with current computer vision technology, which allows automated exterior orientation of images, camera calibration, and the generation of 3D models directly from images of the object itself, based on the automatic detection of significant points. Detailed testing is done using the Agisoft PhotoScan system, and the camera configuration is evaluated with respect to the accuracy of the generated 3D model and the computation time for different types of textures and different processing settings.

  8. 3D Buildings Extraction from Aerial Images

    NASA Astrophysics Data System (ADS)

    Melnikova, O.; Prandi, F.

    2011-09-01

    This paper introduces a semi-automatic method for building extraction through multiple-view aerial image analysis. The advantage of the semi-automatic approach is that each building is processed individually, so the parameters for building feature extraction can be tuned more precisely for each area. In the early stage, the technique extracts line segments only inside manually specified areas. The rooftop hypothesis is then used to determine, from the set of extracted lines and corners obtained in the previous stage, a subset of quadrangles that could form building roofs. After collecting all potential roof shapes in all image overlaps, epipolar geometry is applied to find matches between images. This allows accurate selection of building roofs, removing false positives, and identification of their global 3D coordinates given the cameras' internal parameters and positions. The last step of the image matching is based on geometric constraints, in contrast to traditional correlation; correlation is applied only in some highly restricted areas in order to find coordinates more precisely, thereby significantly reducing the processing time of the algorithm. The algorithm has been tested on a set of aerial images of Milan and shows highly accurate results.

  9. A miniature high resolution 3-D imaging sonar.

    PubMed

    Josserand, Tim; Wolley, Jason

    2011-04-01

    This paper discusses the design and development of a miniature, high resolution 3-D imaging sonar. The design utilizes frequency-steered phased array (FSPA) technology. FSPAs present a small, low-power solution to the problem of underwater imaging sonars. The technology provides a method to build sonars with a large number of beams without the proportional power, circuitry, and processing complexity. The design differs from previous methods in that the array elements are manufactured from a monolithic material. With this technique the arrays are flat and considerably smaller element dimensions are achievable, which allows for higher frequency ranges and smaller array sizes. In the current frequency range, the demonstrated array has ultra-high image resolution (1″ range×1° azimuth×1° elevation) and small size (<3″×3″). The design of the FSPA utilizes the phasing-induced frequency-dependent directionality of a linear phased array to produce multiple beams in a forward sector. The FSPA requires only two hardware channels per array and can be arranged in single and multiple array configurations that deliver wide sector 2-D images. 3-D images can be obtained by scanning the array in a direction perpendicular to the 2-D image field and applying suitable image processing to the multiple scanned 2-D images. This paper introduces the 3-D FSPA concept, theory and design methodology. Finally, results from a prototype array are presented and discussed. PMID:21112066
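The frequency-dependent directionality that an FSPA exploits follows from driving a linear array with a fixed per-element phase shift: signals add in phase in the direction where d·sin θ = φλ/(2π), so the beam steers as the frequency changes. A sketch with hypothetical sonar numbers (pitch, phase shift, and band are invented, not the prototype's values):

```python
import numpy as np

C = 1500.0          # speed of sound in water, m/s

def beam_angle_deg(f_hz, d_m, phase_shift_rad):
    """Steering angle of a linear array driven with a fixed per-element
    phase shift: d * sin(theta) = phase * lambda / (2 * pi)."""
    s = phase_shift_rad * C / (2 * np.pi * f_hz * d_m)
    return np.degrees(np.arcsin(s))

d = 0.0015                        # hypothetical 1.5 mm element pitch
phi = np.pi / 2                   # fixed 90-degree phase progression
freqs = np.linspace(300e3, 600e3, 4)
angles = beam_angle_deg(freqs, d, phi)

# Sweeping the transmit band sweeps the beam: higher frequencies steer
# the beam closer to broadside, so each frequency bin maps to one beam.
assert np.all(np.diff(angles) < 0)
```

This frequency-to-angle mapping is why a whole fan of beams needs only two hardware channels: the beams are separated in the frequency domain rather than by per-element electronics.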

  10. Use of Low-cost 3-D Images in Teaching Gross Anatomy.

    ERIC Educational Resources Information Center

    Richards, Boyd F.; And Others

    1987-01-01

    With advances in computer technology, it has become possible to create three-dimensional (3-D) images of anatomical structures for use in teaching gross anatomy. Reported is a survey of attitudes of 91 first-year medical students toward the use of 3-D images in their anatomy course. Reactions to the 3-D images and suggestions for improvement are…

  11. Coherence cube technology adds geologic insight to 3-D data

    SciTech Connect

    Morris, D.

    1997-05-01

    Three-dimensional (3-D) seismic technology is now widely applied to assess the risk associated with hydrocarbon trap definition, including faulting, stratigraphic features, and reservoir description. Critical new technologies to exploit the wealth of information contained within 3-D seismic have recently begun to emerge; most notably, coherence cube technology, developed by Amoco Production Research and licensed to Coherence Technology Co. (CTC). Coherence cube processing produces interpretable images of faults and subtle stratigraphic features, such as buried deltas, river channels, and beaches, by quantifying seismic coherence attributes. The technique has important implications for geophysical, geological, and reservoir engineering applications. The paper discusses how coherency works, applications, and an example in delineating southern North Sea faulting.
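Coherence-cube processing images faults by quantifying how similar each seismic trace is to its neighbors; abrupt drops in similarity trace out discontinuities. A minimal sketch of one such attribute follows (zero-lag normalized cross-correlation of adjacent traces over a sliding time window); this is an illustrative simplification, not CTC's proprietary algorithm, and the window length is an assumption.

```python
import numpy as np

def trace_coherence(traces, win=11):
    """Crude coherence attribute: normalized zero-lag correlation between
    each pair of adjacent seismic traces over a sliding time window.
    traces: 2D array of shape (n_traces, n_samples).
    Returns an array of shape (n_traces - 1, n_samples); values near 1
    indicate continuity, sharp drops suggest faulting."""
    n_tr, n_s = traces.shape
    half = win // 2
    out = np.zeros((n_tr - 1, n_s))
    for i in range(n_tr - 1):
        a, b = traces[i], traces[i + 1]
        for t in range(n_s):
            lo, hi = max(0, t - half), min(n_s, t + half + 1)
            wa, wb = a[lo:hi], b[lo:hi]
            denom = np.linalg.norm(wa) * np.linalg.norm(wb)
            out[i, t] = np.dot(wa, wb) / denom if denom > 0 else 0.0
    return out
```

Applied slice by slice through a 3-D survey, an attribute of this kind produces the interpretable fault and channel images the abstract describes.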

  12. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets likewise. The photos are taken with mobile devices, and can thereafter directly be calibrated using standard calibration algorithms of photogrammetry and computer vision, on that device. Due to still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the sever to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented bunch of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  13. 3D reconstruction based on CT image and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxun; Zhang, Mingmin

    2004-03-01

Reconstructing a 3-D model of the liver and its internal piping (vascular and ductal) system, and simulating the liver operation on that model, can increase the accuracy and safety of liver surgery: it helps minimize surgical trauma, shorten operation time, raise the success rate, reduce medical costs, and promote patient recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system from CT images and simulate the surgical operation. A direct volume rendering method establishes the 3-D model of the liver. Under OpenGL, a space-point rendering method displays the liver's internal piping system and the simulated operation. Finally, the wavelet transform is used to compress the medical image data.

  14. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  15. 3-D MAPPING TECHNOLOGIES FOR HIGH LEVEL WASTE TANKS

    SciTech Connect

    Marzolf, A.; Folsom, M.

    2010-08-31

    This research investigated four techniques that could be applicable for mapping of solids remaining in radioactive waste tanks at the Savannah River Site: stereo vision, LIDAR, flash LIDAR, and Structure from Motion (SfM). Stereo vision is the least appropriate technique for the solids mapping application. Although the equipment cost is low and repackaging would be fairly simple, the algorithms to create a 3D image from stereo vision would require significant further development and may not even be applicable since stereo vision works by finding disparity in feature point locations from the images taken by the cameras. When minimal variation in visual texture exists for an area of interest, it becomes difficult for the software to detect correspondences for that object. SfM appears to be appropriate for solids mapping in waste tanks. However, equipment development would be required for positioning and movement of the camera in the tank space to enable capturing a sequence of images of the scene. Since SfM requires the identification of distinctive features and associates those features to their corresponding instantiations in the other image frames, mockup testing would be required to determine the applicability of SfM technology for mapping of waste in tanks. There may be too few features to track between image frame sequences to employ the SfM technology since uniform appearance may exist when viewing the remaining solids in the interior of the waste tanks. Although scanning LIDAR appears to be an adequate solution, the expense of the equipment ($80,000-$120,000) and the need for further development to allow tank deployment may prohibit utilizing this technology. The development would include repackaging of equipment to permit deployment through the 4-inch access ports and to keep the equipment relatively uncontaminated to allow use in additional tanks. 3D flash LIDAR has a number of advantages over stereo vision, scanning LIDAR, and SfM, including full frame

  16. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications, and remote sensing applications. The experiment applies multi-source data fusion to 3D scene reconstruction based on the principles of 3D laser scanning: the laser point cloud data serve as the basis, a Digital Ortho-photo Map serves as an auxiliary source, and 3ds Max is used as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is faithful to reality and that its accuracy meets the needs of 3D scene construction.

  17. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  18. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice. PMID:22004854

  19. Critical comparison of 3D imaging approaches

    SciTech Connect

    Bennett, C L

    1999-06-03

Currently, three imaging spectrometer architectures (tunable filter, dispersive, and Fourier transform) are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the alternatives depends on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each alternative is delineated, both for instruments having ideal performance and for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope are used as a representative case to provide a basis for comparison of the various alternatives.

  20. Designing 3D Mesenchymal Stem Cell Sheets Merging Magnetic and Fluorescent Features: When Cell Sheet Technology Meets Image-Guided Cell Therapy

    PubMed Central

    Rahmi, Gabriel; Pidial, Laetitia; Silva, Amanda K. A.; Blondiaux, Eléonore; Meresse, Bertrand; Gazeau, Florence; Autret, Gwennhael; Balvay, Daniel; Cuenod, Charles André; Perretta, Silvana; Tavitian, Bertrand; Wilhelm, Claire; Cellier, Christophe; Clément, Olivier

    2016-01-01

    Cell sheet technology opens new perspectives in tissue regeneration therapy by providing readily implantable, scaffold-free 3D tissue constructs. Many studies have focused on the therapeutic effects of cell sheet implantation while relatively little attention has concerned the fate of the implanted cells in vivo. The aim of the present study was to track longitudinally the cells implanted in the cell sheets in vivo in target tissues. To this end we (i) endowed bone marrow-derived mesenchymal stem cells (BMMSCs) with imaging properties by double labeling with fluorescent and magnetic tracers, (ii) applied BMMSC cell sheets to a digestive fistula model in mice, (iii) tracked the BMMSC fate in vivo by MRI and probe-based confocal laser endomicroscopy (pCLE), and (iv) quantified healing of the fistula. We show that image-guided longitudinal follow-up can document both the fate of the cell sheet-derived BMMSCs and their healing capacity. Moreover, our theranostic approach informs on the mechanism of action, either directly by integration of cell sheet-derived BMMSCs into the host tissue or indirectly through the release of signaling molecules in the host tissue. Multimodal imaging and clinical evaluation converged to attest that cell sheet grafting resulted in minimal clinical inflammation, improved fistula healing, reduced tissue fibrosis and enhanced microvasculature density. At the molecular level, cell sheet transplantation induced an increase in the expression of anti-inflammatory cytokines (TGF-ß2 and IL-10) and host intestinal growth factors involved in tissue repair (EGF and VEGF). Multimodal imaging is useful for tracking cell sheets and for noninvasive follow-up of their regenerative properties. PMID:27022420

  1. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct insomuch as it does not require prior computation of image motion. It allows movement of both viewing system and multiple independently moving objects. The problem is formulated following a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of 3-D motion segmentation to the image sequence spatiotemporal variations. The second term is of regularization of depth. The assumption that environmental objects are rigid accounts automatically for the regularity of 3-D motion within each region of segmentation. The third and last term is for the regularity of segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. This algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351

  2. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

With the development of 3D display and virtual reality technology, its applications are becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. We reconstruct the submarine pipeline and its surrounding seabed terrain in the computer using the Horde3D graphics rendering engine on top of the foundation database "submarine pipeline and relative landforms landscape synthesis database", so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from pipeline monitoring.

  3. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 [2001], doi:10.1364/JOSAA.18.000765) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  4. VHRS Stereo Images for 3D Modelling of Buildings

    NASA Astrophysics Data System (ADS)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

The paper presents a project carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment concerns the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with a photogrammetric workstation (Summit Evolution) combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for the orientation of the satellite image pair, stereo-measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS errors for the model reconstructed from the RPC coefficients alone were 16.6 m, 2.7 m, and 47.4 m for the X, Y, and Z coordinates, respectively. With the addition of 5 control points, the RMS errors improved to 0.7 m, 0.7 m, and 1.0 m; the best results were achieved when the RMS errors were estimated from deviations at 17 check points (with 5 control points), amounting to 0.4 m, 0.5 m, and 0.6 m for X, Y, and Z, respectively. The extracted 3D vector data for the buildings were integrated with 2D data of the ground footprints and then used for 3D modelling of the buildings in Google SketchUp software. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (in terms of the number of details) was reconstructed at level LoD1, while the accuracy of the models corresponded to LoD2.

  5. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector has made it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF frequency yields the range information at each pixel, enabling 3D MMW imaging. In this work we experimentally demonstrate the feasibility of an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  6. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
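The median fusion and right-view synthesis steps described above can be sketched as follows. This is an illustrative simplification assumed for clarity (integer pixel shifts, no occlusion filling), not the authors' exact pipeline:

```python
import numpy as np

def fuse_disparities(disparity_maps):
    """Combine the disparity fields of the retrieved stereopairs by taking
    the pixel-wise median, a robust estimate against differences in content,
    noise, and distortion between the matched pairs."""
    stack = np.stack(disparity_maps, axis=0)  # (n_matches, H, W)
    return np.median(stack, axis=0)

def render_right_view(left, disparity):
    """Naive depth-image-based rendering: shift each left-image pixel by its
    (rounded) disparity to synthesize the right view. Pixels left unfilled
    correspond to occlusions / newly exposed areas needing inpainting."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```

The median is the key robustness choice here: a single badly matched stereopair contributes an outlier disparity that the median simply ignores, whereas a mean would smear it into the result.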

  7. Recent development of 3D display technology for new market

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Sik

    2003-11-01

A multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications, and a projection-type 3D display was introduced for low-cost commercialization. One high-resolution projection panel and a single projection lens are capable of displaying multi-view autostereoscopic images. The system can cope with various arrangements of 3D camera systems (or pixel arrays) and resolutions of 3D displays. It shows high 3D image quality in terms of resolution, brightness, and contrast, so it is well suited for commercialization in the game and advertisement markets.

  8. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

    Recent developments in 3D imaging lidar are presented. Long range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step changing advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single photon counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9km, with 4μJ pulses with a frame rate of 100kHz using a low-cost fibre laser operating at a wavelength of λ=1.5 μm. The range resolution is less than 4cm providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system are also proposed.
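Photon-counting ranging of the kind described above reduces to histogramming photon arrival times over many pulses and converting the peak bin to range via R = c·t/2. The sketch below is a minimal illustration; the bin width and timestamps are assumed values, not parameters from the paper:

```python
from collections import Counter

C = 299792458.0  # speed of light, m/s

def range_resolution_m(bin_width_s, c=C):
    """One-way range resolution set by the timing bin width: dR = c * dt / 2."""
    return c * bin_width_s / 2.0

def range_from_photon_events(timestamps_s, bin_width_s, c=C):
    """Histogram single-photon arrival times accumulated over many pulses;
    take the most-populated bin as the surface return and convert its
    round-trip time to one-way range via R = c * t / 2."""
    counts = Counter(int(t / bin_width_s) for t in timestamps_s)
    peak_bin = max(counts, key=counts.get)
    return c * (peak_bin + 0.5) * bin_width_s / 2.0
```

Note that a timing bin of roughly 250 ps corresponds to about 3.7 cm of range resolution, consistent in order of magnitude with the sub-4 cm figure quoted in the abstract.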

  9. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
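The intensity-thresholding step of the semi-automated tool, and the volume calculation that follows segmentation, can be sketched as below. The threshold fraction and voxel size are illustrative assumptions, not values from the study:

```python
import numpy as np

def segment_by_threshold(volume, fraction=0.5):
    """Intensity-based thresholding: keep voxels whose counts exceed a
    chosen fraction of the maximum voxel value in the SPECT volume."""
    return volume >= fraction * volume.max()

def region_volume_ml(mask, voxel_size_mm=(4.0, 4.0, 4.0)):
    """Volume of a segmented region from its voxel count and the
    physical voxel dimensions (mm^3 converted to mL)."""
    voxel_mm3 = voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0
```

In practice the threshold fraction would be tuned interactively per study, which is exactly the kind of operator dependence the fully automated fuzzy-connectedness approach mentioned above aims to remove.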

  10. 3-D Packaging: A Technology Review

    NASA Technical Reports Server (NTRS)

    Strickland, Mark; Johnson, R. Wayne; Gerke, David

    2005-01-01

Traditional electronics are assembled as a planar arrangement of components on a printed circuit board (PCB) or other type of substrate. These planar assemblies may then be plugged into a motherboard or card cage creating a volume of electronics. This architecture is common in many military and space electronic systems as well as large computer and telecommunications systems and industrial electronics. The individual PCB assemblies can be replaced if defective or for system upgrade. Some applications are constrained by the volume or the shape of the system and are not compatible with the motherboard or card cage architecture. Examples include missiles, camcorders, and digital cameras. In these systems, planar rigid-flex substrates are folded to create complex 3-D shapes. The flex circuit serves the role of motherboard, providing interconnection between the rigid boards. An example of a planar rigid-flex assembly prior to folding is shown. In both architectures, the interconnection is effectively 2-D.

  11. 3D Imaging with Holographic Tomography

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Kou, Shan Shan

    2010-04-01

There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, in which rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information-transfer point of view, we use 3D transfer function analysis to describe quantitatively how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a 'diabolo' is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case.
This time, we

  12. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds, and accuracies. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, and shapes, and across ambient lighting conditions, are crucial. To date, however, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  13. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

Stereoscopic display technologies have been developed as one class of advanced displays, and many TV manufacturers have been pursuing commercialization of 3D TV. We have been developing 3D TV based on LCD with an LED BLU (backlight unit) since Samsung launched the world's first 3D TV based on PDP. However, the panel's data scanning and the LC response characteristics of LCD TV cause interference among frames (crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control in FHD 3D LCD TV.

  14. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described in a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen plane after taking the picture), the light field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data are displayed.
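The decoding step described above (reconstructing a refocused 2D image from light field data) is commonly implemented as shift-and-add over sub-aperture views. Below is a minimal real-domain sketch, matching the abstract's choice of processing in the real rather than Fourier domain; the integer shifts and the dict-of-views layout are illustrative assumptions:

```python
import numpy as np

def refocus(subaperture_images, positions, shift_per_unit):
    """Shift-and-add refocusing: translate each sub-aperture view in
    proportion to its (u, v) position in the aperture, then average.
    shift_per_unit selects the focal plane (0.0 keeps the original focus).
    subaperture_images: dict mapping (u, v) -> 2D numpy array."""
    acc = None
    for (u, v) in positions:
        img = subaperture_images[(u, v)]
        dy = int(round(shift_per_unit * v))
        dx = int(round(shift_per_unit * u))
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(positions)
```

Sweeping shift_per_unit over a range of values produces a focal stack, which is the "sectioning" of the encoded 3D data into 2D images that the abstract refers to.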

  15. 3D Holographic Technology and Its Educational Potential

    ERIC Educational Resources Information Center

    Lee, Hyangsook

    2013-01-01

    This article discusses a number of significant developments in 3D holographic technology, its potential to revolutionize aspects of teaching and learning, and challenges of implementing the technology in educational settings.

  16. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  17. Full 3D microwave quasi-holographic imaging

    NASA Astrophysics Data System (ADS)

    Castelli, Juan-Carlos; Tardivel, Francois

    A full 3D quasi-holographic image processing technique developed by ONERA is described. A complex backscattering coefficient of a drone scale model was measured for discrete values of the 3D backscattered wave vector over a frequency range of 4.5-8 GHz. The 3D image processing is implemented on an HP 1000 minicomputer and will be part of the LASER 2 software to be used in three indoor RCS measurement facilities.

  18. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  19. Evaluation of 3D technologies in dentistry.

    PubMed

    Gracco, Antonio; Mazzoli, Alida; Raffaeli, Roberto; Germani, Michele

    2008-01-01

    Quality of service, in terms of improvement in patient satisfaction, is an increasingly important objective in all medical fields, and is especially imperative in orthodontics due to the high numbers of patients treated. Information technology can provide a meaningful contribution to bettering treatment processes, and we maintain that systems such as CAD, CAM and CAE, although initially conceived for industrial purposes, should be evaluated, studied and customized with a view to use in medicine. The present study aims to evaluate Reverse Engineering (RE) and Rapid Prototyping (RP) in order to define an ideal chain of advanced technological solutions to support the critical processes of orthodontic activity. PMID:19294238

  20. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However, following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax, and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for 3D perception not based on binocular parallax. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  1. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  2. 3D Laser Scanning in Technology Education.

    ERIC Educational Resources Information Center

    Flowers, Jim

    2000-01-01

    A three-dimensional laser scanner can be used as a tool for design and problem solving in technology education. A hands-on experience can enhance learning by captivating students' interest and empowering them with creative tools. (Author/JOW)

  3. Interactive 3D imaging technologies: application in advanced methods of jaw bone reconstruction using stem cells/pre-osteoblasts in oral surgery

    PubMed Central

    Wojtowicz, Andrzej; Perek, Jan; Popowski, Wojciech

    2014-01-01

    Cone beam computed tomography has created a revolution in maxillofacial imaging, facilitating the transition of diagnosis from 2D to 3D and expanding the role of imaging from diagnosis to actual treatment planning. Many varieties of cone beam computed tomography-related software are available, from basic DICOM viewers to very advanced planning modules, such as InVivo Anatomage and SimPlant (Materialise Dental). Through the use of these programs, scans can be processed into a high-quality three-dimensional simulation which enables planning of the overall treatment. In this article, methods of visualization are demonstrated and compared using the example of 2 cases of reconstruction of advanced jaw bone defects using tissue engineering. Advanced imaging methods allow one to plan a minimally invasive treatment, including assessment of the bone defect's shape and localization, planning of a surgical approach and individual graft preparation. PMID:25337171

  4. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    Virtual reality technology (VRT) inherently has the potential to construct an effective learning environment due to its 3I characteristics: Interaction, Immersion and Imagination. It is now applied in education in increasingly profound ways as VRT develops. The Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is the most important factor. Considering the advantages and disadvantages of each Web3D technology, the VRML, Cult3D and Viewpoint technologies were chosen. A web chatroom based on Flash and ASP technology was also created in order to make the Virtual Museum an interactive learning environment.

  5. Development of 3D microwave imaging reflectometry in LHD (invited).

    PubMed

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn antenna mixer arrays. Multi-channel receivers have also been developed using microstrip-line technology to handle many channels at reasonable cost. This system was first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  6. Imaging PVC gas pipes using 3-D GPR

    SciTech Connect

    Bradford, J.; Ramaswamy, M.; Peddy, C.

    1996-11-01

    Over the years, many enhancements have been made by the oil and gas industry to improve the quality of seismic images. The GPR project at GTRI borrows heavily from these technologies in order to produce 3-D GPR images of PVC gas pipes. As will be demonstrated, improvements in GPR data acquisition, 3-D processing, and visualization schemes yield good images of PVC pipes in the subsurface. Data have been collected in cooperation with the local gas company and at a test facility in Texas. Surveys were conducted over both a metal pipe and PVC pipes of diameters ranging from 1/2 in. to 4 in. at depths from 1 ft to 3 ft in different soil conditions. The metal pipe produced very good reflections and was used to fine-tune and optimize the processing run stream. It was found that the following steps significantly improve the overall image: (1) Statics for drift and topography compensation, (2) Deconvolution, (3) Filtering and automatic gain control, (4) Migration for focusing and resolution, and (5) Visualization optimization. The processing flow implemented is relatively straightforward, simple to execute and robust under varying conditions. Future work will include testing resolution limits, effects of soil conditions, and leak detection.
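Of the processing steps listed above, automatic gain control (AGC) is the simplest to illustrate. The sketch below is a generic sliding-window RMS AGC, not GTRI's actual run stream; the synthetic trace and window length are illustrative assumptions.

```python
import numpy as np

def agc(trace, window):
    """Automatic gain control: divide each sample by the RMS amplitude
    in a sliding window, equalizing weak late arrivals against strong
    early ones."""
    n = len(trace)
    out = np.empty(n)
    half = window // 2
    for i in range(n):
        seg = trace[max(0, i - half): min(n, i + half + 1)]
        rms = np.sqrt(np.mean(seg ** 2))
        out[i] = trace[i] / rms if rms > 0 else 0.0
    return out

# A decaying reflection series: late events are weak before AGC.
t = np.arange(200)
trace = np.exp(-t / 50.0) * np.sin(2 * np.pi * t / 20.0)
balanced = agc(trace, window=21)
```

After AGC the late-time oscillations have roughly the same amplitude as the early ones, which is why the step improves the visibility of deep pipe reflections.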

  7. Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis

    NASA Astrophysics Data System (ADS)

    Mah, J.; Claire, S.; Steve, M.

    2009-05-01

    Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to the lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above ground field trial was conducted at a blocky road cut with well defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario) where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m away from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is by manual measurement using a compass. In order to be accepted as a substitute for this method, the LCS should be capable of performing at least to the capabilities of manual measurements. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. 
Underground, two main joint
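Converting laser-scan geometry into the strike/dip convention used for the joint sets above reduces to computing a plane normal. A minimal sketch, assuming an east/north/up coordinate frame, the right-hand-rule strike convention, and three non-collinear points picked from the triangular mesh (this is not the LCS or Polyworks workflow itself):

```python
import numpy as np

def strike_dip(p1, p2, p3):
    """Orientation of the plane through three (x, y, z) points on a
    fracture surface; x = east, y = north, z = up.
    Returns (strike, dip) in degrees, right-hand rule."""
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    if n[2] < 0:              # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    # Dip direction is the azimuth of the normal's horizontal projection;
    # strike is 90 degrees counterclockwise from it.
    strike = np.degrees(np.arctan2(n[0], n[1])) - 90.0
    return strike % 360.0, dip

# A plane dipping 45 degrees due east: z drops as x (east) increases.
s, d = strike_dip((0, 0, 0), (0, 1, 0), (1, 0, -1))
```

Applying this per mesh facet and plotting the resulting poles on a stereonet is the standard way joint sets like 236/09, 321/89 and 325/01 are clustered.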

  8. Current status of stereoscopic 3D LCD TV technologies

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jin

    2011-06-01

    The year 2010 may be recorded as the first year of successful commercial 3D products. Among them, 3D LCD TVs are expected to be the major one in terms of sales volume. In this paper, the principles of current stereoscopic 3D LCD TV techniques and the flat panel display (FPD) technologies required to realize them are reviewed.

  9. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for the semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the 3D model to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  10. 3D Cell Culture Imaging with Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Dimiduk, Thomas; Nyberg, Kendra; Almeda, Dariela; Koshelva, Ekaterina; McGorty, Ryan; Kaz, David; Gardel, Emily; Auguste, Debra; Manoharan, Vinothan

    2011-03-01

    Cells in higher organisms naturally exist in a three-dimensional (3D) structure, a fact sometimes ignored by in vitro biological research. Confinement to a two-dimensional culture imposes significant deviations from the native 3D state. One of the biggest obstacles to wider use of 3D cultures is the difficulty of 3D imaging. The confocal microscope, the dominant 3D imaging instrument, is expensive, bulky, and light-intensive; live cells can be observed for only a short time before they suffer photodamage. We present an alternative 3D imaging technique, digital holographic microscopy, which can capture 3D information with axial resolution better than 2 μm in a 100 μm deep volume. Capturing a 3D image requires only a single camera exposure with a sub-millisecond laser pulse, allowing us to image cell cultures using five orders of magnitude less light energy than with confocal microscopy. This can be done with hardware costing ~$1000. We use the instrument to image the growth of MCF7 breast cancer cells and P. pastoris yeast. We acknowledge support from the NSF GRFP.

  11. 3D imaging using projected dynamic fringes

    NASA Astrophysics Data System (ADS)

    Shaw, Michael M.; Atkinson, John T.; Harvey, David M.; Hobson, Clifford A.; Lalor, Michael J.

    1994-12-01

    An instrument capable of highly accurate, non-contact range measurement has been developed, based upon the principle of projected rotating fringes. More usually known as dynamic fringe projection, this technique is exploited in the dynamic automated range transducer (DART). The intensity waveform seen at the target and sensed by the detector contains all the information required to accurately determine the fringe order. This, in turn, allows the range to be evaluated by substituting the fringe order into a simple algebraic expression. Various techniques for the analysis of the received intensity signals from the surface of the target have been investigated. The accuracy to which the range can be determined ultimately depends upon the accuracy to which the fringe order can be evaluated from the received intensity waveform. It is extremely important to be able to closely determine the fractional fringe order value to achieve any meaningful results. This paper describes a number of techniques which have been used to analyze the intensity waveform, and critically appraises their suitability in terms of accuracy and required speed of operation. This work also examines the development of this instrument for three-dimensional measurements based on single- or two-beam systems. Using CCD array detectors, a 3-D range map of the object's surface may be produced.

  12. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and their dependence on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  13. Highway 3D model from image and lidar data

    NASA Astrophysics Data System (ADS)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  14. Compression of 3D integral images using wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This achieves decorrelation within and between the 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with a previous 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance with respect to compression ratio and image quality at very low bit rates.
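The viewpoint-extraction step described above (one pixel per microlens) is a simple de-interleaving, and the final reconstruction step is its exact inverse. A minimal sketch for a unidirectional integral image whose lenticular pitch is an integer number of pixels; the pitch value and image size are illustrative assumptions, and the transform/coding stages are omitted:

```python
import numpy as np

def extract_viewpoints(integral_img, pitch):
    """Split a unidirectional integral image into viewpoint images by
    taking one pixel column per lenticular lens (pitch in pixels)."""
    return [integral_img[:, k::pitch] for k in range(pitch)]

def reassemble(viewpoints, pitch):
    """Inverse operation: put each viewpoint pixel back into its
    original position under its microlens."""
    h, w = viewpoints[0].shape
    out = np.empty((h, w * pitch), dtype=viewpoints[0].dtype)
    for k, vp in enumerate(viewpoints):
        out[:, k::pitch] = vp
    return out

img = np.arange(4 * 12).reshape(4, 12).astype(float)  # toy pitch-4 lenticular
vps = extract_viewpoints(img, pitch=4)
restored = reassemble(vps, pitch=4)
```

In the paper's pipeline the 2D-DWT/3D-DCT coding sits between these two functions; because the de-interleaving is lossless, all distortion comes from quantization of the transform coefficients.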

  15. 3D Printing in Zero-G ISS Technology Demonstration

    NASA Technical Reports Server (NTRS)

    Werkheiser, Niki; Cooper, Kenneth C.; Edmunson, Jennifer E.; Dunn, Jason; Snyder, Michael

    2013-01-01

    The National Aeronautics and Space Administration (NASA) has a long-term strategy to fabricate components and equipment on demand for manned missions to the Moon, Mars, and beyond. To support this strategy, NASA's Marshall Space Flight Center (MSFC) and Made in Space, Inc. are developing the 3D Printing In Zero-G payload as a technology demonstration for the International Space Station (ISS). The 3D Printing In Zero-G experiment ('3D Print') will be the first machine to perform 3D printing in space.

  16. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via programmable aperture. Secondly, we use SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective to reconstruct dense 3D scene.

  17. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
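The resolution estimate quoted above (full width at half maximum of a profile across the carbon-fiber image) can be computed with a generic FWHM routine. This is not the authors' code; the routine below linearly interpolates the half-maximum crossings, and the Gaussian test profile with a 0.42 mm FWHM is a synthetic stand-in for the measured fiber profile:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile, with
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(a, b):
        # x position where y crosses `half` between samples a and b
        return x[a] + (half - y[a]) * (x[b] - x[a]) / (y[b] - y[a])

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(y) - 1 else cross(i1 + 1, i1)
    return right - left

# Synthetic profile: a Gaussian whose FWHM (= 2.355 sigma) is 0.42 mm.
x = np.linspace(-1, 1, 2001)          # mm
sigma = 0.42 / 2.355
y = np.exp(-x**2 / (2 * sigma**2))
w = fwhm(x, y)
```

Interpolating the crossings rather than taking the nearest samples matters when, as here, the width being measured is only a few pixels across.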

  18. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  19. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density image of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed at several levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at each level. The current flux field patterns are sensed, and a density pattern of the bed at each level is determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  20. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-01

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging. PMID:25836861
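The Stokes-parameter calculation that the abstract says must be handled carefully reduces, in the conventional (non-photon-starved) case, to simple combinations of four polarizer-filtered intensities. The sketch below shows only these standard relations and the degree of linear polarization; it does not include the paper's Maximum Likelihood Estimation or Total Variation Denoising steps, and the test images are synthetic:

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization from
    four intensity images taken through a polarizer at 0/45/90/135 deg."""
    s0 = i0 + i90                    # total intensity
    s1 = i0 - i90                    # horizontal vs vertical
    s2 = i45 - i135                  # +45 vs -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.where(s0 > 0, s0, 1.0)
    return s0, s1, s2, dolp

# Fully polarized horizontal light: everything passes the 0-deg polarizer,
# nothing passes at 90 deg, half passes at 45 and 135 deg.
i0 = np.full((2, 2), 10.0)
i90 = np.zeros((2, 2))
i45 = np.full((2, 2), 5.0)
i135 = np.full((2, 2), 5.0)
s0, s1, s2, dolp = stokes_dolp(i0, i45, i90, i135)
```

In photon-counting conditions each of the four images is a sparse count map, so these ratios become unstable, which is precisely why the paper replaces the direct division with a maximum-likelihood estimate.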

  1. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at the 10% level) of 1.0 mm^-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.

  2. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
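The matching stage described above (strongest degree of match over all object orientations, computed at each pixel) can be sketched with normalized cross-correlation. This toy restricts orientations to 90-degree rotations so no resampling is needed, and it is not the paper's model-based matcher; the L-shaped template and scene are invented for illustration:

```python
import numpy as np

def degree_of_match(image, template):
    """Maximum normalized cross-correlation of a template over all
    pixel offsets and 90-degree rotations (arbitrary angles would
    require resampling; right-angle steps keep this sketch exact)."""
    best = -1.0
    for k in range(4):                       # 0, 90, 180, 270 degrees
        t = np.rot90(template, k).astype(float)
        tn = t - t.mean()
        th, tw = t.shape
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                p = image[i:i + th, j:j + tw].astype(float)
                pn = p - p.mean()
                denom = np.linalg.norm(tn) * np.linalg.norm(pn)
                if denom > 0:
                    best = max(best, float((tn * pn).sum()) / denom)
    return best

# An L-shaped object hidden in the scene, rotated 90 degrees from the model.
template = np.array([[1, 0], [1, 1]], dtype=float)
scene = np.zeros((6, 6))
scene[2:4, 2:4] = np.rot90(template)   # same object, different orientation
score = degree_of_match(scene, template)
```

A real system would keep the per-pixel maxima (not just the global one) and find unambiguous local maxima, which then drive the thumbnail figure-of-merit ranking described in the cueing stage.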

  3. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with a new optical non-conventional 3D laser imaging technique. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  4. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective: Conventional inverse-scattering algorithms for microwave breast imaging result in moderate-resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods: We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Fréchet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results: Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion: Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance: 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  5. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
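A minimal sketch of the idea behind GMM-based point set registration, reduced to a single 2D rotation found by brute-force search. The paper's method is a full rigid optimization with orientation-augmented mixtures and bifurcation weighting; everything below is an illustrative simplification with assumed names.

```python
import math

def gmm_score(model, scene, sigma=1.0):
    # Sum of isotropic Gaussian responses: how well transformed model
    # points land on scene points (equal weights, no outlier term).
    return sum(math.exp(-((mx - sx) ** 2 + (my - sy) ** 2) / (2 * sigma ** 2))
               for mx, my in model for sx, sy in scene)

def register_rotation(model, scene, steps=360):
    # Brute-force search over one rotation angle; a stand-in for the
    # gradient-based rigid optimization used in the paper.
    def rotated(theta):
        c, s = math.cos(theta), math.sin(theta)
        return [(x * c - y * s, x * s + y * c) for x, y in model]

    best_k = max(range(steps),
                 key=lambda k: gmm_score(rotated(2 * math.pi * k / steps), scene))
    return 2 * math.pi * best_k / steps
```

The soft Gaussian score is what makes this family of methods robust: unlike nearest-neighbor ICP, every point pair contributes smoothly, so small outliers and missing branches degrade the objective gracefully.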

  6. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT have also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  7. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goals for the first year of this three-dimensional elastodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity, and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  8. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and develop a practical solution. We attempt to remove the noise present in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, a 3D visualization of the brain is produced through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
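The paper's specific adaptive curve shrinkage function in spherical coordinates is not given here, but the general wavelet-shrinkage idea it builds on can be sketched with a one-level Haar transform and soft thresholding. This is illustrative code under those assumptions, not the authors' method.

```python
import math

R2 = math.sqrt(2.0)

def haar_forward(signal):
    # One level of the orthonormal Haar transform: approximation + detail.
    approx = [(a + b) / R2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / R2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / R2, (a - d) / R2])
    return out

def soft_shrink(coeffs, t):
    # Soft thresholding: shrink each detail coefficient toward zero by t;
    # small (noise-like) coefficients are zeroed, large ones survive.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, soft_shrink(detail, t))
```

With the threshold at zero the transform is perfectly invertible; raising it suppresses small high-frequency detail, which is the noise model wavelet shrinkage assumes.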

  9. 3D-spectral domain computational imaging

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Ferra, Herman; Lorenser, Dirk; Frisken, Steven

    2016-03-01

    We present a proof-of-concept experiment utilizing a novel "snap-shot" spectral domain OCT technique that captures a phase coherent volume in a single frame. The sample is illuminated with a collimated beam of 75 μm diameter and the back-reflected light is analyzed by a 2-D matrix of spectral interferograms. A key challenge that is addressed is simultaneously maintaining lateral and spectral phase coherence over the imaged volume in the presence of sample motion. Digital focusing is demonstrated for 5.0 μm lateral resolution over an 800 μm axial range.

  10. Morphometrics, 3D Imaging, and Craniofacial Development.

    PubMed

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  11. Recognition technology research based on 3D fingerprint

    NASA Astrophysics Data System (ADS)

    Tian, Qianxiao; Huang, Shujun; Zhang, Zonghua

    2014-11-01

    Fingerprints have been widely studied and applied to personal recognition in both forensic and civilian applications. However, fingerprints are currently identified from 2D (two-dimensional) images, and the mapping from 3D (three-dimensional) to 2D loses one dimension of information, which leads to low accuracy and even false recognition. This paper presents a 3D fingerprint recognition method based on the fringe projection technique. A series of fringe patterns generated by software are projected onto a finger surface through a projecting system. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. The deformed fringe pattern images give the 3D shape data of the finger and the 3D fingerprint features. By converting the 3D fingerprints to 2D space, traditional 2D fingerprint recognition methods can be applied to 3D fingerprint recognition. Experimental results on measuring and recognizing some 3D fingerprints show the accuracy and availability of the developed 3D fingerprint system.
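Fringe projection systems typically recover the phase that encodes surface height from several phase-shifted patterns. A standard four-step formula is sketched below as an assumption; the record does not state which phase-retrieval variant the authors use.

```python
import math

def phase_from_four_steps(i1, i2, i3, i4):
    # Four-step phase shifting with pi/2 shifts:
    #   I_k = A + B*cos(phi + (k - 1)*pi/2),  k = 1..4
    # => I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi)
    # so atan2 recovers phi regardless of background A and modulation B.
    return math.atan2(i4 - i2, i1 - i3)
```

The recovered (wrapped) phase at each pixel, after unwrapping and calibration, gives the height map from which the 3D fingerprint features are extracted.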

  12. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  13. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms offers the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
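As a toy illustration of the iterative reconstruction family discussed above, the classic ART (Kaczmarz) update can be sketched as follows. The record does not specify which algorithm the authors use; this is simply one well-known representative, on a deliberately tiny system.

```python
def art_reconstruct(rows, b, n, iterations=50, relax=1.0):
    # ART (Kaczmarz) iteration: project the current estimate onto each
    # measurement hyperplane a.x = b_i in turn. Iterative methods of this
    # kind tolerate few projection angles far better than analytic
    # (filtered back-projection) reconstruction.
    x = [0.0] * n
    for _ in range(iterations):
        for a, bi in zip(rows, b):
            norm = sum(ai * ai for ai in a)
            if norm == 0.0:
                continue
            c = relax * (bi - sum(ai * xi for ai, xi in zip(a, x))) / norm
            x = [xi + c * ai for ai, xi in zip(a, x)]
    return x
```

In real few-view CT the rows are ray-path weights through the voxel grid and the system is huge and underdetermined, which is where the computational cost mentioned above comes from.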

  15. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from an observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display; on the 3D display panel; and 5, 10, 15, and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  16. 3D Printing technologies for drug delivery: a review.

    PubMed

    Prasad, Leena Kumari; Smyth, Hugh

    2016-07-01

    With the FDA approval of the first 3D printed tablet, Spritam®, there is now a precedent set for the utilization of 3D printing for the preparation of drug delivery systems. The capabilities for dispensing low volumes with accuracy, precise spatial control, and layer-by-layer assembly allow for the preparation of complex compositions and geometries. The high degree of flexibility and control with 3D printing enables the preparation of dosage forms with multiple active pharmaceutical ingredients with complex and tailored release profiles. A unique opportunity also exists for this technology in the preparation of personalized doses to address individual patient needs. This review will highlight the 3D printing technologies being utilized for the fabrication of drug delivery systems, as well as the formulation and processing parameters for consideration. This article will also summarize the range of dosage forms that have been prepared using these technologies, specifically over the last 10 years. PMID:26625986

  17. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry: The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower technologies. Following the construction, optimization, and implementation of several components, including a diffuser, band-pass filter, registration mount, and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
Benchmarking tests showed the mean 3D gamma passing rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%) for scans totaling ~10 minutes, indicating an excellent ability to perform 3D dosimetry while improving the speed of
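The gamma passing rate quoted above combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (3 mm) above a dose threshold (5%). A simplified 1-D version with global normalization can be written as follows; parameter names and the 1-D reduction are illustrative assumptions.

```python
import math

def gamma_pass_rate(ref, test, dx=1.0, dose_tol=0.03, dist_tol=3.0,
                    threshold=0.05):
    # Simplified 1-D gamma analysis: a reference point passes if some
    # test point agrees within the combined dose and distance-to-agreement
    # tolerances (gamma <= 1). ref/test are dose samples on a grid of
    # spacing dx (mm); points below the dose threshold are excluded.
    dmax = max(ref)
    passed = total = 0
    for i, rd in enumerate(ref):
        if rd < threshold * dmax:
            continue                      # below the 5% dose threshold
        total += 1
        gamma = min(math.hypot((td - rd) / (dose_tol * dmax),
                               (j - i) * dx / dist_tol)
                    for j, td in enumerate(test))
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / total if total else 100.0
```

Real 3D gamma analysis searches a spatial neighborhood in all three dimensions and usually interpolates the test distribution, but the pass/fail metric has exactly this form.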

  18. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  19. 3D-printing technologies for electrochemical applications.

    PubMed

    Ambrosi, Adriano; Pumera, Martin

    2016-05-21

    Since its conception during the 1980s, 3D-printing, also known as additive manufacturing, has been receiving unprecedented levels of attention and interest from industry and research laboratories, in addition to end users, who have benefited from the pervasiveness of relatively cheap, desktop-size printing machines. 3D-printing enables almost infinite possibilities for rapid prototyping. Therefore, it has been considered for applications in numerous research fields, ranging from mechanical engineering, medicine, and materials science to chemistry. Electrochemistry is another branch of science that can certainly benefit from 3D-printing technologies, paving the way for the design and fabrication of cheaper, higher performing, and ubiquitously available electrochemical devices. Here, we aim to provide a general overview of the most commonly available 3D-printing methods along with a review of recent electrochemistry-related studies adopting 3D-printing as a possible rapid prototyping fabrication tool. PMID:27048921

  20. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal.
This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes

  1. High-throughput imaging: Focusing in on drug discovery in 3D.

    PubMed

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, little HTI/HCS with large compound libraries has been reported. Nonetheless, 3D HTI instrumentation technology is advancing, and it is now on the verge of allowing 3D HCS of thousands of samples. This review focuses on state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening. PMID:26608110

  2. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information that must be stored without any distortion. This paper applies a watermarking technique to the non-ROI of the medical image while preserving the ROI, and presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D EIA data are then embedded into the host image. The watermark extraction process is the inverse of the embedding process. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method can overcome the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  3. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
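The Snell reflection point mentioned above is where a ray from the antenna crosses the air-ground interface on its way to a buried scatterer. A simple sketch of finding that point is shown below, using bisection on Snell's law rather than the paper's vectorized quartic solver; the function name, geometry parameters, and default refractive indices are all illustrative.

```python
import math

def snell_refraction_point(h, d, z, n1=1.0, n2=3.0, iters=60):
    # Find the surface crossing x in [0, d] for a ray from an antenna at
    # height h above ground to a buried point at depth z and horizontal
    # offset d, by bisection on Snell's law:
    #   n1*sin(theta_incidence) = n2*sin(theta_refraction).
    def f(x):
        sin1 = x / math.hypot(x, h)            # sine of incidence angle
        sin2 = (d - x) / math.hypot(d - x, z)  # sine of refraction angle
        return n1 * sin1 - n2 * sin2

    lo, hi = 0.0, d                            # f(lo) < 0 <= f(hi), f monotone
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a denser lower medium (n2 > n1) the ray bends toward the normal underground, so the crossing point shifts toward a position more nearly above the target.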

  4. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
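For reference, the MIP operation itself is simple: each output pixel is the maximum voxel value along a viewing ray. A minimal sketch for an axis-aligned projection of a nested-list volume is given below (illustrative only; the paper's contribution is computing such views from raw projection data, not from a reconstructed volume like this).

```python
def mip_along_z(volume):
    # Maximum Intensity Projection: each output pixel is the maximum
    # voxel value along the z axis of a [z][y][x] nested-list volume.
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(nz))
             for x in range(nx)]
            for y in range(ny)]
```

Projecting along an arbitrary view direction requires ray casting with interpolation, but the per-ray reduction is the same max.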

  5. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  6. Analysis of Impact of 3D Printing Technology on Traditional Manufacturing Technology

    NASA Astrophysics Data System (ADS)

    Wu, Niyan; Chen, Qi; Liao, Linzhi; Wang, Xin

    With the quiet rise of 3D printing technology in the automobile, aerospace, industrial, medical and other fields, many insiders hold different opinions on its development. This paper objectively analyzes the impact of 3D printing technology on mold making technology and, by comparing the advantages and disadvantages of 3D-printed molds and traditional mold making, puts forward the idea of fusing and complementing the two technologies.

  7. 3D-Holoscopic Imaging: A New Dimension to Enhance Imaging in Minimally Invasive Therapy in Urologic Oncology

    PubMed Central

    Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar

    2013-01-01

    Abstract Background and Purpose Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine (DICOM) data from CT and MRI to produce 3D-holographic representations of anatomy, viewable without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles of light-field duplication. The 3D content is captured in real time and viewed by multiple viewers independently of their position, without 3D eyewear. Methods We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results The results demonstrate that medical 3D-holoscopic content can be displayed on a commercially available multiview auto-stereoscopic display. Conclusion The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging. PMID:23216303

  8. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, mapping the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.
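    The core mapping step can be sketched as a simple camera projection: each 3D vertex is projected into the 2D thermogram and assigned the temperature of the pixel it lands on. The intrinsics, mesh, and nearest-pixel sampling below are illustrative assumptions, not the calibration used by the authors:

```python
import numpy as np

def map_thermal_to_mesh(vertices, K, thermal):
    """Assign a temperature to each 3D vertex by projecting it into the
    2D thermogram with a pinhole camera (intrinsics K, camera frame)
    and sampling the nearest pixel. Purely illustrative geometry."""
    temps = []
    for v in vertices:
        u, w, z = K @ v                        # homogeneous image coords
        px, py = int(round(u / z)), int(round(w / z))
        temps.append(thermal[py, px])
    return np.array(temps)

K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])  # toy intrinsics
thermal = np.zeros((64, 64))
thermal[32, 32] = 36.6                          # hot spot at image centre
verts = np.array([[0.0, 0.0, 1.0]])             # vertex on the optical axis
print(map_thermal_to_mesh(verts, K, thermal))   # [36.6]
```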

  9. SNR analysis of 3D magnetic resonance tomosynthesis (MRT) imaging

    NASA Astrophysics Data System (ADS)

    Kim, Min-Oh; Kim, Dong-Hyun

    2012-03-01

    In conventional 3D Fourier transform (3DFT) MR imaging, the signal-to-noise ratio (SNR) is governed by the well-known relationship of being proportional to the voxel size and the square root of the imaging time. Here, we introduce an alternative 3D imaging approach, termed MRT (Magnetic Resonance Tomosynthesis), which can generate a set of tomographic MR images similar to the multiple 2D projection images of x-ray tomosynthesis. A multiple-oblique-view (MOV) pulse sequence is designed to acquire the tomography-like images used in the tomosynthesis process, and an iterative back-projection (IBP) reconstruction method is used to reconstruct the 3D images. SNR analysis shows that the resolution-SNR tradeoff is not governed as in the typical 3DFT MR imaging case. The proposed method provides a higher SNR than the conventional 3D imaging method, with a partial loss of slice-direction resolution. It is expected that this method can be useful for extremely low SNR cases.
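    The conventional relationship referred to above can be made concrete with a toy calculation (arbitrary units; the proportionality constant is omitted):

```python
import math

def relative_snr(voxel_volume_mm3: float, scan_time_s: float) -> float:
    """Conventional 3DFT rule of thumb: SNR scales with voxel volume
    and with the square root of total imaging time (constant omitted)."""
    return voxel_volume_mm3 * math.sqrt(scan_time_s)

# Halving each voxel dimension (1/8 the volume) costs 8x in SNR,
# which would take a 64x longer scan to recover.
base = relative_snr(1.0, 100.0)
small = relative_snr(0.125, 100.0)
print(base / small)  # -> 8.0
```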

  10. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gestures in 3D space is examined using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity and color data for a moving object in real time without assigning markers to the targets. First, gestures are tracked in 2D space by calculating 2D flow vectors at each point with an ordinary optical flow estimation method, based on time sequences of the intensity data. The location of each point after 2D movement is then detected on the x-y plane using the obtained 2D flow vectors. Depth information for each point after movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of these 3D flow vectors allow us to track the 3D movement of the target, and on that basis it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
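    The lifting of 2D flow vectors to 3D using the range data can be sketched as follows (hypothetical array layouts, with integer pixel flows for simplicity):

```python
import numpy as np

def lift_flow_to_3d(points_xy, flow_xy, range_map_t0, range_map_t1):
    """Turn 2D optical-flow vectors into 3D flow vectors by reading
    the depth of each point before and after its 2D displacement.
    points_xy: (N, 2) integer pixel coords; flow_xy: (N, 2) integer
    displacements; the range maps give depth per pixel at each time."""
    flow3d = []
    for (x, y), (dx, dy) in zip(points_xy, flow_xy):
        z0 = range_map_t0[y, x]
        z1 = range_map_t1[y + dy, x + dx]
        flow3d.append((dx, dy, z1 - z0))
    return np.array(flow3d, dtype=float)

# A single point moving one pixel right while receding 0.5 depth units.
r0 = np.full((4, 4), 2.0)
r1 = np.full((4, 4), 2.5)
v = lift_flow_to_3d([(1, 1)], [(1, 0)], r0, r1)
print(v)  # one 3D flow vector: (1, 0, 0.5)
```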

  11. Perceptual quality measurement of 3D images based on binocular vision.

    PubMed

    Zhou, Wujie; Yu, Lu

    2015-07-20

    Three-dimensional (3D) technology has become immensely popular in recent years and is widely adopted in various applications. Hence, perceptual quality measurement of symmetrically and asymmetrically distorted 3D images has become an important, fundamental, and challenging issue in 3D imaging research. In this paper, we propose a binocular-vision-based 3D image-quality measurement (IQM) metric. Consideration of the 3D perceptual properties of the primary visual cortex (V1) and the higher visual areas (V2) for 3D-IQM is the major technical contribution of this research. To be more specific, the metric first simulates the receptive fields of complex cells (V1) using binocular energy and binocular rivalry responses, and the higher visual areas (V2) using local binary pattern features. Then, three similarity scores of 3D perceptual properties between the reference and distorted 3D images are measured. Finally, using support vector regression, the three similarity scores are integrated into an overall 3D quality score. Experimental results on two public benchmark databases demonstrate that, in comparison with most current 2D and 3D metrics, the proposed metric achieves significantly higher consistency with subjective fidelity ratings. PMID:26367842
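    The final pooling step, regressing from the three similarity scores to one quality score, can be sketched as follows. The paper uses support vector regression on real subjective ratings; this toy stands in a plain least-squares regressor and synthetic data:

```python
import numpy as np

# Toy training data: each row holds three similarity scores
# (V1 energy, V1 rivalry, V2/LBP) for a distorted image; y is a
# synthetic "subjective" score generated from known weights.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 3))
w_true = np.array([0.5, 0.3, 0.2])
y = X @ w_true

# Fit the pooling weights by least squares (stand-in for SVR).
A = np.hstack([X, np.ones((50, 1))])          # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Pool three new similarity scores into an overall quality score.
score = np.array([0.9, 0.8, 0.7, 1.0]) @ coef
print(round(score, 3))
```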

  12. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can in turn be used to update the parameters of the Gibbs prior models. The two methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved with relatively low time cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.

  13. Research and Teaching: Methods for Creating and Evaluating 3D Tactile Images to Teach STEM Courses to the Visually Impaired

    ERIC Educational Resources Information Center

    Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.

    2015-01-01

    Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…

  14. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278
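    The learning step can be illustrated with a linear toy problem: given training pairs of motion parameters and the projection residues they induce, a linear operator mapping residues back to parameters is fit by least squares. This is a heavily simplified, synthetic stand-in for the paper's multi-scale regressions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic forward model: motion parameters c (2-D here) produce
# projection intensity residues r via an unknown linear map M.
M = rng.normal(size=(100, 2))          # 100-pixel residue per sample
C_train = rng.normal(size=(2, 30))     # 30 training motions
R_train = M @ C_train                  # co-varying residues

# Learn the inverse operator L by least squares, so that c ≈ r @ L.
L, *_ = np.linalg.lstsq(R_train.T, C_train.T, rcond=None)

# At "registration time", apply L to a new residue to estimate motion.
c_true = np.array([0.7, -1.2])
c_est = (M @ c_true) @ L
print(np.allclose(c_est, c_true))  # True
```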

  15. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  16. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can be therefore compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean+/-standard deviation) was equal to 46.6°+/-9.2° for male subjects (N = 189), 47.6°+/-10.7° for female subjects (N = 181), and 47.1°+/-10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
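    With the anatomical references extracted (femoral head centers, sacral endplate center and inclination), the angle itself is elementary vector geometry. A minimal sketch with toy coordinates, not patient data:

```python
import numpy as np

def pelvic_incidence(fh_left, fh_right, sacral_center, sacral_normal):
    """Angle of pelvic incidence: the angle between the normal of the
    sacral endplate and the line joining the endplate centre to the
    midpoint of the two femoral head centres (all points in 3D)."""
    hip_axis_mid = (np.asarray(fh_left) + np.asarray(fh_right)) / 2.0
    v = hip_axis_mid - np.asarray(sacral_center)
    n = np.asarray(sacral_normal, dtype=float)
    cosang = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(abs(cosang)))

# Toy geometry: endplate normal tilted 45 degrees from the line
# joining the endplate centre to the hip axis midpoint.
pi_angle = pelvic_incidence([-1, 0, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1])
print(round(pi_angle, 1))  # 45.0
```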

  17. Single 3D cell segmentation from optical CT microscope images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Reeves, Anthony P.

    2014-03-01

    The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure of merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.
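    The Dice Similarity Coefficient used for this evaluation is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two toy 1-D "masks" overlapping in 2 of their 3 foreground pixels each.
seg = np.array([1, 1, 1, 0, 0])
ref = np.array([0, 1, 1, 1, 0])
print(dice(seg, ref))  # 2*2 / (3+3) = 0.666...
```

The same function applies unchanged to 2D or 3D masks, since the intersection and sums are taken over all elements.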

  18. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure that uses the DSP together with a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipeline processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system); its preemptive kernel and powerful configuration tool allow us to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
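    The fringe-analysis core of phase measurement profilometry can be sketched with the standard four-step phase-shifting formula (a generic textbook version in NumPy, not the authors' DSP implementation):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting fringe analysis: recover the wrapped
    phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2.
    With I(s) = A + B*cos(phase + s), the bias A and modulation B
    cancel in the differences below."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthesize the four shifted fringe patterns for a known phase map.
phase = np.linspace(-np.pi / 2, np.pi / 2, 5)   # toy 1-D "image"
A, B = 2.0, 1.0                                 # bias and modulation
shots = [A + B * np.cos(phase + s)
         for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

recovered = wrapped_phase(*shots)
print(np.allclose(recovered, phase))  # True
```

The recovered phase is wrapped to (-pi, pi]; a separate phase-unwrapping step, as mentioned in the abstract, is needed for absolute range.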

  19. Development of 3D in Vitro Technology for Medical Applications

    PubMed Central

    Ou, Keng-Liang; Hosseinkhani, Hossein

    2014-01-01

    In the past few years, biomaterials technologies, together with significant efforts on developing biology, have revolutionized the engineering of materials. Three-dimensional (3D) in vitro technology aims to develop a set of tools that are simple, inexpensive, portable and robust, and that could be commercialized and used in various fields of biomedical sciences such as drug discovery, diagnostic tools, and therapeutic approaches in regenerative medicine. The proliferation of cells in a 3D scaffold needs an oxygen and nutrition supply, and 3D scaffold materials should provide such an environment for cells living in close proximity. 3D scaffolds that are able to regenerate or restore tissue and/or organs have begun to revolutionize medicine and biomedical science, and have been used to support and promote the regeneration of tissues. Different processing techniques have been developed to design and fabricate three-dimensional scaffolds for tissue engineering implants. Throughout this review, we inform the reader about the potential applications of different 3D in vitro systems that can be applied for fabricating a wider range of novel biomaterials for use in tissue engineering. PMID:25299693

  20. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  2. 3D Printing in Technology and Engineering Education

    ERIC Educational Resources Information Center

    Martin, Robert L.; Bowden, Nicholas S.; Merrill, Chris

    2014-01-01

    In the past five years, there has been tremendous growth in the production and use of desktop 3D printers. This growth has been driven by the increasing availability of inexpensive computing and electronics technologies. The ability to rapidly share ideas and intelligence over the Internet has also played a key role in the growth. Growth is also…

  3. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three-dimensional (3D) display technology that requires no additional viewing equipment. A new system is proposed in this paper which combines the elemental images of real images in real mode (RIRM) with those of virtual images in real mode (VIRM). The real images in real mode are the same as in conventional integral imaging. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in the elemental images, which can then be reconstructed by the lens array in virtual space. In order to reduce the spot size of the reconstructed images, the diffuser of conventional integral imaging is omitted in the proposed method; the spot size is then nearly 1/20 of that in the conventional system. An optical integral imaging system is constructed to confirm that the proposed method opens a new way for the application of passive 3D display technology.

  4. Application of 3D printing technology in aerodynamic study

    NASA Astrophysics Data System (ADS)

    Olasek, K.; Wiklak, P.

    2014-08-01

    3D printing, as an additive process, offers much more than traditional machining techniques in terms of the achievable complexity of a model shape. That fact was the motivation to adopt the discussed technology as a method for creating objects intended for aerodynamic testing. The following paper provides an overview of various 3D printing techniques. Four models of a standard NACA0018 aerofoil were manufactured using different materials and methods: MultiJet Modelling (MJM), Selective Laser Sintering (SLS) and Fused Deposition Modelling (FDM). Various parameters of the models have been included in the analysis: surface roughness, strength, detail quality, surface imperfections and irregularities, as well as thermal properties.

  5. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. The other method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
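    The second display method reduces to slicing the volume along the three index axes at the selected point. A minimal sketch, using a toy volume with the same 26-slice depth as the data described above:

```python
import numpy as np

def orthogonal_sections(volume, x, y, z):
    """Return the three orthogonal slices (axial, coronal, sagittal)
    that intersect at the user-selected voxel (x, y, z)."""
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

vol = np.arange(26 * 8 * 8).reshape(26, 8, 8)   # toy 26-slice stack
ax, co, sa = orthogonal_sections(vol, x=3, y=4, z=10)
print(ax.shape, co.shape, sa.shape)  # (8, 8) (26, 8) (26, 8)
# All three sections share the selected voxel:
print(ax[4, 3] == co[10, 3] == sa[10, 4])  # True
```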

  6. Overview of 3D surface digitization technologies in Europe

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2006-02-01

    This paper presents an overview of the different 3D surface digitization technologies commercially available in the European market. The solutions for 3D surface measurement offered by major European companies can be divided into different groups depending on various characteristics, such as technology (e.g. laser scanning, white light projection), system construction (e.g. fixed, on CMM/robot/arm) or measurement type (e.g. surface scanning, profile scanning). Crossing between the categories is possible; however, the majority of commercial products can be divided into the following groups: (a) laser profilers mounted on CMMs, (b) portable coded light projection systems, (c) desktop solutions with a laser profiler or coded light projection system and a multi-axis platform, (d) laser point measurement systems where both sensor and object move, (e) hand-operated laser profilers and hand-held laser profiler or point measurement systems, (f) dedicated systems. This paper presents the different 3D surface digitization technologies and describes their advantages and disadvantages. Various examples of their use are shown for different application fields. Special attention is given to applications regarding 3D surface measurement of the human body.

  7. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade, ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is the poor visibility of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools; as the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 +/- 0.16 mm throughout the imaging volume of 55 degrees x 27 degrees x 150 mm. Additionally, the tool was successfully tracked inside a beating-heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact. PMID:22003612
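    The coordinate reconstruction can be illustrated with toy beam geometry: the transmit beam that reaches the sensor fixes the direction, and the acoustic time of flight fixes the range. The spherical-angle convention below is an assumption for illustration, not the reconstruction actually used in the paper:

```python
import math

def sensor_position(beam_azimuth_deg, beam_elev_deg, arrival_time_s,
                    c=1540.0):
    """Locate an ultrasound sensor from the transmit beam that hits it:
    the beam angles give the direction and the one-way time of flight
    gives the range (c = speed of sound in tissue, m/s). Toy geometry."""
    r = c * arrival_time_s
    az = math.radians(beam_azimuth_deg)
    el = math.radians(beam_elev_deg)
    x = r * math.cos(el) * math.sin(az)
    y = r * math.sin(el)
    z = r * math.cos(el) * math.cos(az)
    return x, y, z

# A sensor straight ahead of the probe, 0.1 ms away at 1540 m/s (154 mm).
print(sensor_position(0.0, 0.0, 1e-4))
```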

  8. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

    Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images: the system only requires a sequence of images taken with a camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction, and the procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Second, pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, processed together with its previous image, the points of interest corresponding to those in the previous images are refined or corrected, eliminating the vertical parallax between the images. The next step is to calibrate the camera, calculating its intrinsic and external parameters and thereby obtaining its relative position and orientation. A sequence of depth maps is then acquired by using a non-local cost aggregation method for stereo matching, and from the scene depths and the external camera parameters a point cloud sequence is obtained and merged into a point cloud model. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display.

  9. Wave-CAIPI for Highly Accelerated 3D Imaging

    PubMed Central

    Bilgic, Berkin; Gagoski, Borjan A.; Cauley, Stephen F.; Fan, Audrey P.; Polimeni, Jonathan R.; Grant, P. Ellen; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To introduce the Wave-CAIPI (Controlled Aliasing in Parallel Imaging) acquisition and reconstruction technique for highly accelerated 3D imaging with negligible g-factor and artifact penalties. Methods The Wave-CAIPI 3D acquisition involves playing sinusoidal gy and gz gradients during the readout of each kx encoding line, while modifying the 3D phase encoding strategy to incur inter-slice shifts as in 2D-CAIPI acquisitions. The resulting acquisition spreads the aliasing evenly in all spatial directions, thereby taking full advantage of 3D coil sensitivity distribution. By expressing the voxel spreading effect as a convolution in image space, an efficient reconstruction scheme that does not require data gridding is proposed. Rapid acquisition and high quality image reconstruction with Wave-CAIPI is demonstrated for high-resolution magnitude and phase imaging and Quantitative Susceptibility Mapping (QSM). Results Wave-CAIPI enables full-brain gradient echo (GRE) acquisition at 1 mm isotropic voxel size and R=3×3 acceleration with maximum g-factors of 1.08 at 3T, and 1.05 at 7T. Relative to the other advanced Cartesian encoding strategies 2D-CAIPI and Bunched Phase Encoding, Wave-CAIPI yields up to 2-fold reduction in maximum g-factor for 9-fold acceleration at both field strengths. Conclusion Wave-CAIPI allows highly accelerated 3D acquisitions with low artifact and negligible g-factor penalties, and may facilitate clinical application of high-resolution volumetric imaging. PMID:24986223

  10. Implementation of 3D Optical Scanning Technology for Automotive Applications.

    PubMed

    Kuş, Abdil

    2009-01-01

    Reverse engineering (RE) is a powerful tool for generating a CAD model from the 3D scan data of a physical part that lacks documentation or has changed from the original CAD design of the part. The process of digitizing a part and creating a CAD model from 3D scan data is less time consuming and provides greater accuracy than manually measuring the part and designing it from scratch in CAD. 3D optical scanning technology is one of the measurement methods that have evolved over the last few years, and it is used in a wide range of areas, from industrial applications to art and cultural heritage. It is also used extensively in the automotive industry for applications such as part inspection, scanning of tools without a CAD definition, scanning of castings to define the stock model (i.e. the amount of material to be removed from the surface of the casting) for CAM programs, and reverse engineering. In this study two scanning experiments for automotive applications are illustrated. The first examines the process from scanning to re-manufacturing a damaged sheet metal cutting die using a 3D scanning technique, and the second compares scanned point cloud data to 3D CAD data for inspection purposes. Furthermore, the deviations of the part holes are determined by using different lenses and scanning parameters. PMID:22573995

  12. Possible Applications of 3D Printing Technology on Textile Substrates

    NASA Astrophysics Data System (ADS)

    Korger, M.; Bergschneider, J.; Lutz, M.; Mahltig, B.; Finsterbusch, K.; Rabe, M.

    2016-07-01

    3D printing is a rapidly emerging additive manufacturing technology which can offer cost efficiency and flexibility in product development and production. In textile production 3D printing can also serve as an add-on process to apply 3D structures on textiles. In this study the low-cost fused deposition modeling (FDM) technique was applied using different thermoplastic printing materials available on the market, with a focus on flexible filaments such as thermoplastic elastomers (TPE) or Soft PLA. Since good adhesion and stability of the 3D printed structures on textiles are essential, separation force and abrasion resistance tests were conducted with different kinds of printed woven fabrics, demonstrating that sufficient adhesion can be achieved. The main influencing factor is the topography of the textile surface, determined by the weave, roughness and hairiness, which offers form-locking connections. This is followed by the wettability of the textile surface by the molten polymer, which depends on the textile surface energy and can be specifically controlled by washing (desizing), finishing or plasma treatment of the textile before printing. These basic adhesion mechanisms can also be considered crucial for 3D printing on knitwear.

  13. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or sufficiently high-quality diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined by the curve that represents the vertebral column and by the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
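A minimal sketch of the spine-based coordinate idea: sample polynomial models of the vertebral column and derive, at each axial position, the curve point and the unit tangent that orients a reformatted cross-section. The polynomial coefficients below are illustrative stand-ins for the paper's optimized model:

```python
import numpy as np

def spine_frames(coeff_x, coeff_y, z_samples):
    """Evaluate a polynomial spine model x(z), y(z) (coefficients highest
    order first) and return, per axial position z, the curve point and the
    unit tangent defining the orientation of a reformatted cross-section."""
    x = np.polyval(coeff_x, z_samples)
    y = np.polyval(coeff_y, z_samples)
    dx = np.polyval(np.polyder(coeff_x), z_samples)
    dy = np.polyval(np.polyder(coeff_y), z_samples)
    tangents = np.stack([dx, dy, np.ones_like(z_samples)], axis=1)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    points = np.stack([x, y, z_samples], axis=1)
    return points, tangents

pts, tans = spine_frames([0.01, 0.0, 0.0], [0.02, -0.1, 5.0],
                         np.linspace(0.0, 100.0, 5))
```

Resampling the volume on planes normal to these tangents would then produce the curved planar reformation.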

  14. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Increasing the efficiency of resource use through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their application in agriculture is reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  16. Imaging thin-bed reservoirs with 3-D seismic

    SciTech Connect

    Hardage, B.A.

    1996-12-01

    This article explains how a 3-D seismic data volume, a vertical seismic profile (VSP), electric well logs and reservoir pressure data can be used to image closely stacked thin-bed reservoirs. This interpretation focuses on the Oligocene Frio reservoir in South Texas which has multiple thin-beds spanning a vertical interval of about 3,000 ft.

  17. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. 
Evaluation of registration accuracy between the pseudo-3D method and a true 3D method has
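A minimal sketch of one building block of the pseudo-3D scheme, registering a single 2D orthogonal view with SSD as the similarity measure. For brevity, an exhaustive integer-shift search stands in for Powell's conjugate direction method:

```python
import numpy as np

def ssd(a, b):
    """Sum of Square Difference, the similarity measure used by the paper."""
    return float(np.sum((a - b) ** 2))

def register_2d_shift(fixed, moving, max_shift=3):
    """Find the integer (dy, dx) shift minimizing SSD by exhaustive search.
    This stands in for Powell's search on one orthogonal view; the pseudo-3D
    scheme applies such 2D registrations to the transaxial, sagittal and
    coronal views sequentially and iteratively, updating the transform."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = ssd(fixed, shifted)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

fixed = np.zeros((16, 16)); fixed[5:9, 5:9] = 1.0
moving = np.roll(np.roll(fixed, -2, axis=0), 1, axis=1)  # displaced copy
shift = register_2d_shift(fixed, moving)  # recovers (2, -1)
```

Running three such searches (one per orthogonal view) covers the 3 shift parameters; the rotation per view would be added to the search space the same way.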

  18. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
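The DIBR alternative can be sketched as a forward warp: each pixel of the single rendered view is shifted horizontally by a disparity derived from its depth. The baseline and focal length below are illustrative parameters, not values from the paper:

```python
import numpy as np

def dibr_stereo(image, depth, baseline=0.05, focal=500.0):
    """Warp a single rendered view into a second eye's view by shifting
    each pixel by its disparity (baseline * focal / depth)."""
    h, w = image.shape
    disparity = np.round(baseline * focal / depth).astype(int)
    right = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                # A full renderer would resolve collisions by keeping the
                # nearer pixel; later writes simply win in this sketch.
                right[y, xr] = image[y, x]
                filled[y, xr] = True
    return right, filled  # unfilled pixels are disocclusion holes

img = np.arange(16.0).reshape(4, 4)
right, filled = dibr_stereo(img, np.full((4, 4), 25.0))  # disparity = 1
```

The disocclusion holes left by the warp are the main image-quality cost of DIBR relative to rendering the scene twice, and must be filled by inpainting.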

  19. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  20. 3D body scanning technology for fashion and apparel industry

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2007-01-01

    This paper presents an overview of 3D body scanning technologies with applications to the fashion and apparel industry. Complete systems for the digitization of the human body have existed for more than fifteen years. One of the main users of this technology in the textile field was the military: body scanning has been successfully employed for many years in military bases for fast selection of the correct uniform sizes for the entire staff, and complete solutions were developed especially for this field of application. Many research projects have sought to exploit the same technology in the commercial field. Experiments have been performed, and start-up projects are currently running in different parts of the world, installing full body scanning systems in locations such as shopping malls, boutiques or dedicated scanning centers. Everything is ready to be exploited, and all the required hardware, software and solutions are available: full body scanning systems, software for the automatic and reliable extraction of body measurements, e-kiosk and web solutions for the presentation of garments, and high-end and low-end virtual-try-on systems. However, complete solutions in this area have not yet found the expected commercial success. Today, with the ongoing large cost reduction driven by the appearance of new competitors, methods for digitization of the human body become more interesting for the fashion and apparel industry, and a large expansion of these technologies is expected in the near future. To date, different methods are used commercially for the measurement of the human body. These can be divided into three major groups: laser scanning, projection of light patterns, and combinations of modeling and image processing. The different solutions have strengths and weaknesses that profile their suitability for specific applications. This paper gives an overview of their

  1. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. 
As expected, the 3-D histogram of the real data was
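The ruleset idea described above can be sketched as a set of threshold tests applied to a 3-D data cube: each rule labels the voxels whose physical property and depth fall inside its bounds. The rules and values below are illustrative, not the study's calibrated thresholds:

```python
import numpy as np

def classify_velocity_cube(velocity, depth, rules):
    """Apply a simple OOA-style ruleset to a 3-D velocity model.
    Each rule is (label, vmin, vmax, zmin, zmax): voxels whose velocity
    and depth fall inside the bounds receive the label. Later rules
    overwrite earlier ones where they overlap."""
    labels = np.zeros(velocity.shape, dtype=int)
    for label, vmin, vmax, zmin, zmax in rules:
        mask = ((velocity >= vmin) & (velocity < vmax) &
                (depth >= zmin) & (depth < zmax))
        labels[mask] = label
    return labels

vel = np.full((2, 2, 2), 3.0); vel[1] = 6.0
dep = np.zeros((2, 2, 2));     dep[1] = 10.0
rules = [(1, 2.0, 4.0, 0.0, 5.0),    # hypothetical "slow, shallow" object
         (2, 5.0, 8.0, 5.0, 20.0)]   # hypothetical "fast, deep" object
labels = classify_velocity_cube(vel, dep, rules)
```

Changing a threshold means editing one tuple in the ruleset, which is exactly the fast, repeatable adjustment the approach is designed for.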

  2. Demonstration of Normal and Abnormal Fetal Brains Using 3D Printing from In Utero MR Imaging Data.

    PubMed

    Jarvis, D; Griffiths, P D; Majewski, C

    2016-09-01

    3D printing is a new manufacturing technology that produces high-fidelity models of complex structures from 3D computer-aided design data. Radiology has been particularly quick to embrace the new technology because of the wide access to 3D datasets. Models have been used extensively to assist orthopedic, neurosurgical, and maxillofacial surgical planning. In this report, we describe methods used for 3D printing of the fetal brain by using data from in utero MR imaging. PMID:27079366

  3. Practical applications of 3D sonography in gynecologic imaging.

    PubMed

    Andreotti, Rochelle F; Fleischer, Arthur C

    2014-11-01

    Volume imaging in the pelvis has been well demonstrated to be an extremely useful technique, largely based on its ability to reconstruct the coronal plane of the uterus that usually cannot be visualized using traditional 2-dimensional (2D) imaging. As a result, this technique is now a part of the standard pelvic ultrasound protocol in many institutions. A variety of valuable applications of 3D sonography in the pelvis are discussed in this article. PMID:25444101

  4. 3D Winding Number: Theory and Application to Medical Imaging

    PubMed Central

    Becciu, Alessandro; Fuster, Andrea; Pottek, Mark; van den Heuvel, Bart; ter Haar Romeny, Bart; van Assen, Hans

    2011-01-01

    We develop a new, mathematically elegant formulation to detect critical points of 3D scalar images. It is based on a topological number, which is the generalization to three dimensions of the 2D winding number. We illustrate our method by considering three different biomedical applications, namely, detection and counting of ovarian follicles and neuronal cells, and estimation of cardiac motion from tagged MR images. Qualitative and quantitative evaluation emphasizes the reliability of the results. PMID:21317978
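For intuition, the classic 2D winding number that the paper generalizes can be computed by accumulating the wrapped change of a vector field's angle along a closed loop around the candidate point; a nonzero integer flags a critical point. A minimal sketch:

```python
import numpy as np

def winding_number_2d(field_fn, center, radius=1.0, n=256):
    """2D winding number of a vector field around a point: accumulate the
    wrapped change of the field's angle along a closed circle."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = center + radius * np.stack([np.cos(t), np.sin(t)], axis=1)
    v = np.array([field_fn(p) for p in pts])
    ang = np.arctan2(v[:, 1], v[:, 0])
    dang = np.diff(np.concatenate([ang, ang[:1]]))
    dang = (dang + np.pi) % (2.0 * np.pi) - np.pi  # wrap into (-pi, pi]
    return int(round(np.sum(dang) / (2.0 * np.pi)))

w_source = winding_number_2d(lambda p: p, np.zeros(2))                         # +1
w_saddle = winding_number_2d(lambda p: np.array([p[0], -p[1]]), np.zeros(2))   # -1
```

For critical points of a scalar image, the field would be its gradient; the 3D topological number of the paper plays the same role on a closed surface instead of a loop.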

  5. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV: the image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
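The topology-map idea can be sketched as pruning the candidate image pairs by the distance between the flight-control position fixes, so that feature matching is only attempted between images with plausible overlap. The coordinate format and radius below are assumptions for illustration:

```python
import math

def candidate_pairs(positions, radius):
    """Build an image topology map from flight-control position fixes:
    only images taken within `radius` metres of each other become
    candidate pairs for feature matching, pruning the O(n^2) search.
    Positions are (east, north) offsets in metres (an assumed format)."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= radius:
                pairs.append((i, j))
    return pairs

# Four camera stations on a 30 m flight line, paired within 35 m.
pairs = candidate_pairs([(0, 0), (30, 0), (60, 0), (200, 0)], radius=35)
```

Only the surviving pairs are handed to the expensive feature-matching stage, which is where the reported speed-up comes from.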

  6. 3D ultrasound image segmentation using wavelet support vector machines

    PubMed Central

    Akbari, Hamed; Fei, Baowei

    2012-01-01

    Purpose: Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. Methods: This segmentation method utilizes a statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate the prostate and nonprostate tissue. This method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. The weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard and a variety of methods are used to evaluate the performance of the segmentation method. Results: The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% ± 2.3% and that the sensitivity is 87.7% ± 4.9%. Conclusions: The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate. PMID:22755682
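As a minimal stand-in for the wavelet texture features fed to the W-SVMs, a single-level 2D Haar decomposition of an image patch can be computed directly; summary statistics of the sub-bands then serve as a texture descriptor (the actual paper trains kernel SVMs on such coefficients, which is omitted here):

```python
import numpy as np

def haar_features(patch):
    """Single-level 2D Haar decomposition of an even-sized image patch,
    returning mean magnitudes of the four sub-bands as a texture vector."""
    a = patch[0::2, 0::2]; b = patch[0::2, 1::2]
    c = patch[1::2, 0::2]; d = patch[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return np.array([ll.mean(), np.abs(lh).mean(),
                     np.abs(hl).mean(), np.abs(hh).mean()])

feat = haar_features(np.ones((8, 8)))  # a flat patch has no detail energy
```

In the paper's scheme, such descriptors extracted around each voxel would be the inputs that let the SVMs separate prostate from non-prostate texture.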

  7. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and man-made features belonging to urban areas. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Three main image based approaches are generally used for virtual 3D city model generation: sketch based modeling, procedural grammar based modeling, and close range photogrammetry based modeling. The literature shows that, to date, no complete solution is available to create a complete 3D city model from images alone, and these image based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model were done, and after applying texturing and rendering, a final photo-realistic textured 3D model was created, which can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. 
Aerial photography is restricted in many countries
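The paper does not specify the algorithm used for the scaling-and-alignment step when merging model pieces; one standard choice is Umeyama's closed-form similarity estimation, sketched here with illustrative data:

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~ s*R@src + t
    (Umeyama's method), the kind of scaling/alignment step needed when
    merging piecewise 3D models into one coordinate frame."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                 # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                             # proper rotation
    var_s = (sc ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform from four non-degenerate points.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R0 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° about z
dst = 2.0 * src @ R0.T + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_align(src, dst)
```

Given a handful of corresponding points between two model pieces, this returns the transform that brings one piece into the other's frame before merging.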

  8. Developing novel 3D antennas using advanced additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Mirzaee, Milad

    In today's world of wireless communication systems, antenna engineering is rapidly advancing as wireless services continue to expand in support of emerging commercial applications. Antennas play a key role in the performance of advanced transceiver systems, where they serve to convert electric power to electromagnetic waves and vice versa. Researchers have held significant interest in developing this crucial component for wireless communication systems by employing a variety of design techniques. In the past few years, demand for electrically small antennas has continued to increase, particularly among portable and mobile wireless devices, medical electronics and aerospace systems. This trend toward smaller electronic devices makes three-dimensional (3D) antennas very appealing, since they can be designed to use every available space inside the device. Additive manufacturing (AM) could provide effective solutions for antenna design for the next generation of wireless communication systems. In this thesis, the design and fabrication of 3D printed antennas using AM technology is studied. To demonstrate this application of AM, different types of antenna structures have been designed and fabricated using various manufacturing processes. This thesis studies, for the first time, embedded conductive 3D printed antennas using polylactic acid (PLA) and acrylonitrile butadiene styrene (ABS) for the substrate parts and a high-temperature carbon paste for the conductive parts, which is a good candidate for overcoming the limitations of direct printing on 3D surfaces, currently the most popular method of fabricating the conductive parts of antennas. The thesis also studies, for the first time, the fabrication of antennas with 3D printed conductive parts, which can contribute to a new generation of 3D printed antennas.

  9. 3D Printing In Zero-G ISS Technology Demonstration

    NASA Technical Reports Server (NTRS)

    Werkheiser, Niki; Cooper, Kenneth; Edmunson, Jennifer; Dunn, Jason; Snyder, Michael

    2014-01-01

    The National Aeronautics and Space Administration (NASA) has a long term strategy to fabricate components and equipment on-demand for manned missions to the Moon, Mars, and beyond. To support this strategy, NASA and Made in Space, Inc. are developing the 3D Printing In Zero-G payload as a Technology Demonstration for the International Space Station (ISS). The 3D Printing In Zero-G experiment ('3D Print') will be the first machine to perform 3D printing in space. The greater the distance from Earth and the longer the mission duration, the more difficult resupply becomes; this requires a change from the current spares, maintenance, repair, and hardware design model that has been used on the International Space Station (ISS) up until now. Given the extension of the ISS Program, which will inevitably result in replacement parts being required, the ISS is an ideal platform to begin changing the current model for resupply and repair to one that is more suitable for all exploration missions. 3D Printing, more formally known as Additive Manufacturing, is the method of building parts/objects/tools layer-by-layer. The 3D Print experiment will use extrusion-based additive manufacturing, which involves building an object out of plastic deposited by a wire-feed via an extruder head. Parts can be printed from data files loaded on the device at launch, as well as additional files uplinked to the device while on-orbit. The plastic extrusion additive manufacturing process is a low-energy, low-mass solution to many common needs on board the ISS. The 3D Print payload will serve as the ideal first step to proving that process in space. It is unreasonable to expect NASA to launch large blocks of material from which parts or tools can be traditionally machined, and even more unreasonable to fly up multiple drill bits that would be required to machine parts from aerospace-grade materials such as titanium 6-4 alloy and Inconel. 
The technology to produce parts on demand, in space, offers

  10. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays, 3D acquisition systems are used in many applications, such as the cinema industry or automotive active-safety systems. Depending on the application, systems offer different features, for example color sensitivity, two-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from phase-delay measurements of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances ranging from 10 cm up to 7.5 m.
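The iTOF distance calculation behind such a system is a one-liner: the measured phase delay of the modulated light converts to distance, and the 2π phase wrap sets the maximum unambiguous range. A quick sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_shift_rad, f_mod_hz):
    """Indirect time of flight: distance from the phase delay of
    sinusoidally modulated light, d = c * dphi / (4*pi*f_mod).
    The light travels out and back, hence the factor 2 folded into 4*pi."""
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

def max_unambiguous_range(f_mod_hz):
    """Phase wraps at 2*pi, so the unambiguous range is c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)

# At 20 MHz modulation (an assumed frequency, not stated in the abstract)
# the unambiguous range is about 7.5 m, matching the quoted working range.
r = max_unambiguous_range(20e6)
```

A half-cycle phase delay (dphi = pi) then corresponds to half the unambiguous range, i.e. roughly 3.75 m at 20 MHz.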

  11. 3-D segmentation of human sternum in lung MDCT images.

    PubMed

    Pazokifard, Banafsheh; Sowmya, Arcot

    2013-01-01

    A fully automatic novel algorithm is presented for accurate 3-D segmentation of the human sternum in lung multi-detector computed tomography (MDCT) images. The segmentation result is refined by employing active contours to remove calcified costal cartilage that is attached to the sternum. For each dataset, costal notches (sternocostal joints) are localized in 3-D by using a sternum mask and the positions of the costal notches on it as reference. The proposed algorithm for sternum segmentation was tested on 16 complete lung MDCT datasets, and comparison of the segmentation results to the reference delineation provided by a radiologist shows high sensitivity (92.49%) and specificity (99.51%) and a small mean distance (dmean=1.07 mm). The overall average Euclidean distance error for costal notch positioning in 3-D is 4.2 mm. PMID:24110446
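
    As an illustration of how such voxel-wise figures are computed (the function and the toy volumes below are illustrative, not the paper's code):

```python
import numpy as np

def sensitivity_specificity(seg, ref):
    """Voxel-wise sensitivity and specificity of a binary segmentation
    against a reference delineation."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tp = np.sum(seg & ref)      # reference voxels found
    tn = np.sum(~seg & ~ref)    # background voxels left alone
    fp = np.sum(seg & ~ref)
    fn = np.sum(~seg & ref)
    return tp / (tp + fn), tn / (tn + fp)

# Toy 3D volumes: the segmentation misses one of eight reference voxels.
ref = np.zeros((4, 4, 4), dtype=bool)
ref[1:3, 1:3, 1:3] = True
seg = ref.copy()
seg[1, 1, 1] = False
sens, spec = sensitivity_specificity(seg, ref)
print(sens, spec)  # 0.875 1.0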

  12. Incremental volume reconstruction and rendering for 3-D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry

    1992-09-01

    In this paper, we present approaches toward interactive visualization of real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3-degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6-DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line, and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
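
    A minimal sketch of the reconstruction half of such a pipeline, assuming tracked slices are simply scattered into a regular grid and averaged where they overlap (the function and its API are hypothetical; the paper's incremental ray-casting renderer is not reproduced):

```python
import numpy as np

def insert_slice(volume, counts, slice_img, origin, u_axis, v_axis, spacing=1.0):
    """Scatter one tracked 2D ultrasound slice into a regular 3D volume.

    Pixel (i, j) of the slice lives at origin + i*u_axis + j*v_axis in
    volume coordinates; samples falling on the same voxel are averaged,
    so slices can arrive one at a time (incremental reconstruction).
    """
    h, w = slice_img.shape
    ii, jj = np.mgrid[0:h, 0:w]
    pts = (origin + ii[..., None] * u_axis + jj[..., None] * v_axis) / spacing
    idx = np.round(pts).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)
    x, y, z = idx[valid].T
    np.add.at(volume, (x, y, z), slice_img[valid])   # accumulate samples
    np.add.at(counts, (x, y, z), 1)                  # and hit counts
    return np.divide(volume, np.maximum(counts, 1))  # running average

# Example: an axis-aligned slice at x = 3 fills one plane of the volume.
vol, cnt = np.zeros((8, 8, 8)), np.zeros((8, 8, 8))
avg = insert_slice(vol, cnt, np.full((8, 8), 2.0),
                   origin=np.array([3.0, 0.0, 0.0]),
                   u_axis=np.array([0.0, 1.0, 0.0]),
                   v_axis=np.array([0.0, 0.0, 1.0]))
print(avg[3].mean(), avg[0].mean())  # 2.0 0.0
```

    Keeping a separate hit-count volume is what makes the average updatable slice by slice rather than in a batch.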

  13. Automatic needle segmentation in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Ding, Mingyue; Cardinal, H. Neale; Guan, Weiguang; Fenster, Aaron

    2002-05-01

    In this paper, we propose to use 2D image projections to automatically segment a needle in a 3D ultrasound image. This approach is motivated by the twin observations that the needle is more conspicuous in a projected image, and its projected area is a minimum when the rays are cast parallel to the needle direction. To avoid the computational burden of an exhaustive 2D search for the needle direction, a faster 1D search procedure is proposed. First, a plane which contains the needle direction is determined by the initial projection direction and the (estimated) direction of the needle in the corresponding projection image. Subsequently, an adaptive 1D search technique is used to adjust the projection direction iteratively until the projected needle area is minimized. In order to remove noise and complex background structure from the projection images, a priori information about the needle position and orientation is used to crop the 3D volume, and the cropped volume is rendered with Gaussian transfer functions. We have evaluated this approach experimentally using agar and turkey breast phantoms. The results show that it can find the 3D needle orientation within 1 degree, in about 1 to 3 seconds on a 500 MHz computer.

  14. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    PubMed

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodicity assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone-length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement. PMID:27093439
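
    The temporal bone-length constancy term is described above only in words; below is a minimal sketch of one plausible form, penalising the variance of each bone's length across frames (the array layout and function are assumptions, not the paper's formulation):

```python
import numpy as np

def bone_length_penalty(joints, bones):
    """Temporal bone-length constancy term: for each bone (a pair of
    joint indices), penalise the squared deviation of its length from
    its mean length across all frames of the reconstructed sequence.

    joints: (n_frames, n_joints, 3) array of reconstructed positions.
    bones:  list of (joint_a, joint_b) index pairs."""
    penalty = 0.0
    for a, b in bones:
        lengths = np.linalg.norm(joints[:, a] - joints[:, b], axis=1)
        penalty += np.sum((lengths - lengths.mean()) ** 2)
    return penalty
```

    A reconstruction whose bones keep constant length scores zero; a bone that stretches between frames is penalised quadratically, without fixing the lengths to a predefined skeleton.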

  15. Demonstration of Three Gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, virtual studio, and virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D-model library is built up with the technology of digital image processing and 3D modeling. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  16. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  17. 3D imaging of fetus vertebra by synchrotron radiation microtomography

    NASA Astrophysics Data System (ADS)

    Peyrin, Francoise; Pateyron-Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

    1997-10-01

    A synchrotron radiation computed microtomography system allowing high-resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high-resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to the imaging of bone structure. With a view to studying growth, vertebra samples from fetuses at different gestational ages were imaged. The first results show that the fetal vertebra is quite different from adult bone both in terms of density and organization.

  18. Texture blending on 3D models using casual images

    NASA Astrophysics Data System (ADS)

    Liu, Xingming; Liu, Xiaoli; Li, Ameng; Liu, Junyao; Wang, Huijing

    2013-12-01

    In this paper, a method for constructing a photorealistic textured model using a 3D structured-light digitizer is presented. Our method acquires range images and texture images around the object; the range images are registered and integrated to construct a geometric model of the object. The system is calibrated and the poses of the texture camera are determined, so that the relationship between the texture and the geometric model is established. After that, a global optimization is applied to assign compatible textures to adjacent surfaces, followed by a leveling procedure to remove artifacts due to varying lighting, the approximate geometric model, and so on. Lastly, we demonstrate the effect of our method by constructing a real-world model.

  19. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. Therefore, we developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration by the user is necessary: the user performs a rough positioning of both prosthesis models so that the subsequent fine matching process has a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated by the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
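
    The paper's fine matching is gradient-based; the sketch below substitutes a plain coordinate ascent over the same 6 rigid parameters, stepping each by a minimal amount and halving the step once no parameter improves the matching score (all names and the score function are illustrative):

```python
import numpy as np

def rigid_transform(params, pts):
    """Apply a 6-parameter rigid transform (Euler angles rx, ry, rz in
    radians, then translation tx, ty, tz) to an (n, 3) point set."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

def fine_match(model_pts, score, params0, step=0.1, min_step=1e-4):
    """Perturb each of the 6 parameters by a minimal amount, keep any
    change that raises the matching score, and halve the step once no
    parameter improves (a maximum of the matching function)."""
    params = np.asarray(params0, dtype=float)
    best = score(rigid_transform(params, model_pts))
    while step > min_step:
        improved = False
        for i in range(6):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                s = score(rigid_transform(trial, model_pts))
                if s > best:
                    params, best, improved = trial, s, True
        if not improved:
            step *= 0.5
    return params, best

# Recover a pure translation of a unit cube's corners.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
target = cube + np.array([0.3, -0.2, 0.1])
p, s = fine_match(cube, lambda q: -np.sum((q - target) ** 2), np.zeros(6))
```

    Like the paper's method, this only converges from a reasonable starting point, which is why the rough manual preconfiguration step matters.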

  20. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  1. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is also analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example, in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic crosswell EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
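
    For a damped least-squares inversion, the resolution and posterior covariance matrices described above take a simple closed form; the sketch below assumes that regularization (the paper's actual scheme may differ):

```python
import numpy as np

def appraisal(G, lam, sigma_d=1.0):
    """Image appraisal for the damped least-squares inverse
    m_est = (G^T G + lam*I)^-1 G^T d.

    Returns the model resolution matrix R (m_est = R m_true plus a
    noise term; columns show how a point perturbation smears spatially)
    and the posterior model covariance C_m for independent data noise
    of standard deviation sigma_d."""
    GtG = G.T @ G
    A = np.linalg.inv(GtG + lam * np.eye(GtG.shape[0]))
    R = A @ GtG
    Cm = sigma_d**2 * (A @ GtG @ A.T)
    return R, Cm

# One well-sensed and one weakly-sensed parameter: R approaches the
# identity where resolution is good, and sqrt(diag(Cm)) maps data
# noise into parameter error.
G = np.array([[1.0, 0.0], [0.0, 0.1]])
R, Cm = appraisal(G, lam=1e-2)
print(np.diag(R))             # [~0.99, 0.5]
print(np.sqrt(np.diag(Cm)))   # [~0.99, 5.0]
```

    The weakly-sensed parameter shows both poor resolution (diagonal far below 1) and amplified noise, which is exactly the pairing of diagnostics the abstract describes.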

  2. Right main bronchus perforation detected by 3D-image

    PubMed Central

    Bense, László; Eklund, Gunnar; Jorulf, Hakan; Farkas, Árpád; Balásházy, Imre; Hedenstierna, Göran; Krebsz, Ádám; Madas, Balázs Gergely; Strindberg, Jerker Eden

    2011-01-01

    A male metal worker, who has never smoked, developed debilitating dyspnoea in 2003, which then deteriorated until 2007. Spirometry and chest x-rays provided no diagnosis. A 3D image of the airways was reconstructed from a high-resolution CT (HRCT) in 2007, showing peribronchial air on the right side, mostly along the presegmental airways. After digital subtraction of the image of the peribronchial air, a hole on the cranial side of the right main bronchus was detected. The perforation could be identified on re-examination of the HRCTs from 2007 and 2009, but not in 2010, when it had possibly healed. The patient's occupational exposure to evaporating chemicals might have contributed to the perforation and hampered its healing. A 3D HRCT reconstruction should be considered to detect bronchial anomalies, including wall perforation, when unexplained dyspnoea or other chest symptoms call for extended investigation. PMID:22679238

  3. Ultra-High Resolution 3D Imaging of Whole Cells.

    PubMed

    Huang, Fang; Sirinakis, George; Allgeyer, Edward S; Schroeder, Lena K; Duim, Whitney C; Kromann, Emil B; Phan, Thomy; Rivera-Molina, Felix E; Myers, Jordan R; Irnov, Irnov; Lessard, Mark; Zhang, Yongdeng; Handel, Mary Ann; Jacobs-Wagner, Christine; Lusk, C Patrick; Rothman, James E; Toomre, Derek; Booth, Martin J; Bewersdorf, Joerg

    2016-08-11

    Fluorescence nanoscopy, or super-resolution microscopy, has become an important tool in cell biological research. However, because of its usually inferior resolution in the depth direction (50-80 nm) and rapidly deteriorating resolution in thick samples, its practical biological application has been effectively limited to two dimensions and thin samples. Here, we present the development of whole-cell 4Pi single-molecule switching nanoscopy (W-4PiSMSN), an optical nanoscope that allows imaging of three-dimensional (3D) structures at 10- to 20-nm resolution throughout entire mammalian cells. We demonstrate the wide applicability of W-4PiSMSN across diverse research fields by imaging complex molecular architectures ranging from bacteriophages to nuclear pores, cilia, and synaptonemal complexes in large 3D cellular volumes. PMID:27397506

  4. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³ and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  5. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
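
    A minimal sketch of the object-linking step described above: features in successive 2D slices are joined into 3D tracks whenever they fall within a threshold radius of the previous slice's feature. Greedy nearest-neighbour matching is an assumption here; the abstract does not specify the matching rule.

```python
import math

def link_objects(slices, radius):
    """Link 2D features from successive slices into 3D tracks.

    `slices` is a list (one entry per slice) of lists of (x, y) feature
    positions; a feature extends a track when it lies within `radius`
    of that track's feature in the previous slice."""
    tracks, open_tracks = [], []
    for k, feats in enumerate(slices):
        next_open, unmatched = [], list(feats)
        for tr in open_tracks:
            _, px, py = tr[-1]
            best = min(unmatched, default=None,
                       key=lambda f: math.hypot(f[0] - px, f[1] - py))
            if best is not None and math.hypot(best[0] - px, best[1] - py) <= radius:
                tr.append((k, *best))        # continue the track
                unmatched.remove(best)
                next_open.append(tr)
        for f in unmatched:                  # leftovers start new tracks
            tr = [(k, *f)]
            tracks.append(tr)
            next_open.append(tr)
        open_tracks = next_open
    return tracks

# Two features tracked across two slices yield two tracks of length 2.
print([len(t) for t in link_objects([[(0, 0), (9, 9)],
                                     [(0.4, 0.1), (9.1, 8.8)]], radius=1.0)])  # [2, 2]
```

    A track that finds no continuation within the radius is closed, so broken or ending pipes simply terminate rather than jumping to unrelated features.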

  6. 3D VSP imaging in the Deepwater GOM

    NASA Astrophysics Data System (ADS)

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface and sediment related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic sections. To help address these challenges BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are reduced risk in well placement, improved reserve calculation, and better understanding of compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort BP has influenced both contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline, thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  7. 3D Printing in Zero-G ISS Technology Demonstration

    NASA Technical Reports Server (NTRS)

    Johnston, Mallory M.; Werkheiser, Mary J.; Cooper, Kenneth G.; Snyder, Michael P.; Edmunson, Jennifer E.

    2014-01-01

    The National Aeronautics and Space Administration (NASA) has a long term strategy to fabricate components and equipment on-demand for manned missions to the Moon, Mars, and beyond. To support this strategy, NASA and Made in Space, Inc. are developing the 3D Printing In Zero-G payload as a Technology Demonstration for the International Space Station. The 3D Printing In Zero-G experiment will be the first machine to perform 3D printing in space. The greater the distance from Earth and the longer the mission duration, the more difficult resupply becomes; this requires a change from the current spares, maintenance, repair, and hardware design model that has been used on the International Space Station up until now. Given the extension of the ISS Program, which will inevitably result in replacement parts being required, the ISS is an ideal platform to begin changing the current model for resupply and repair to one that is more suitable for all exploration missions. 3D Printing, more formally known as Additive Manufacturing, is the method of building parts/objects/tools layer-by-layer. The 3D Print experiment will use extrusion-based additive manufacturing, which involves building an object out of plastic deposited by a wire-feed via an extruder head. Parts can be printed from data files loaded on the device at launch, as well as additional files uplinked to the device while on-orbit. The plastic extrusion additive manufacturing process is a low-energy, low-mass solution to many common needs on board the ISS. The 3D Print payload will serve as the ideal first step to proving that process in space. It is unreasonable to expect NASA to launch large blocks of material from which parts or tools can be traditionally machined, and even more unreasonable to fly up specialized manufacturing hardware to perform the entire range of functions traditional machining requires. The technology to produce parts on demand, in space, offers unique design options that are not possible

  8. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programmes, a virtual studio that performs the task of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method of calculating depth from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
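
    A sketch of the colour SSD idea: block matching between two views where the sum of squared differences is accumulated over all three colour channels, so candidates that coincide in grey level but differ in colour are rejected. The function and its parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def colour_ssd_disparity(left, right, x, y, win=3, max_d=8):
    """Disparity at (x, y) by minimising the sum of squared differences
    over a win x win RGB window; summing the SSD over all three colour
    channels is the 'colour SSD'."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1, :].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_d + 1):
        if x - d - h < 0:          # candidate window leaves the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1, :].astype(float)
        cost = np.sum((patch - cand) ** 2)   # over rows, cols and channels
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A scene shifted two pixels between views is recovered exactly.
rng = np.random.default_rng(0)
left = rng.random((16, 16, 3))
right = np.roll(left, -2, axis=1)
print(colour_ssd_disparity(left, right, x=8, y=8))  # 2
```

    With elemental images from a lens array, the same matching can be repeated over multiple baselines and the costs summed, which is how the multiple-baseline method sharpens the depth estimate.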

  9. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole tongue motion. Experimental results show that use of combined information improves motion estimation near the tongue surface, an area previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown. PMID:24505742

  10. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  11. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian sub-continent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. Firstly, the overall visual quality of a scene is assessed in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping, and other image artefacts. Uniform targets in desert and sea regions are identified, for which detailed radiometric performance evaluation for the IR channels is carried out. The mean brightness temperature (BT) of each target is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise-equivalent differential temperature (NEdT) for IR channels is consistent and well within specifications.
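
    NEdT over a uniform target is commonly estimated as the standard deviation of repeated brightness-temperature readings after removing slow drift; the sketch below follows that common definition and is not ISRO's actual procedure (the simulated target values are invented for illustration):

```python
import numpy as np

def nedt(bt_samples):
    """Noise-equivalent differential temperature over a uniform target:
    the standard deviation of repeated brightness-temperature readings
    after removing slow drift with a linear fit."""
    t = np.arange(len(bt_samples), dtype=float)
    drift = np.polyval(np.polyfit(t, bt_samples, 1), t)
    return float(np.std(bt_samples - drift))

# Simulated sea target: 300 K mean, slow linear drift, 0.2 K sensor noise.
rng = np.random.default_rng(7)
samples = 300.0 + 0.05 * np.arange(200) + rng.normal(0.0, 0.2, 200)
print(round(nedt(samples), 2))  # close to 0.2
```

    Detrending first matters: a diurnal BT trend over the target would otherwise be counted as sensor noise and inflate the NEdT estimate.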

  12. Automated Identification of Fiducial Points on 3D Torso Images

    PubMed Central

    Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2013-01-01

    Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

  13. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.
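The accuracy measure named here, the mean square difference between the final registered image and the target, is easy to state concretely. Below is a minimal NumPy sketch (illustrative function name of my choosing, not the authors' code):

```python
import numpy as np

def mean_square_difference(registered, target):
    """Mean squared intensity difference between two equal-shape images.
    Zero for a perfect registration; grows with residual misalignment."""
    r = np.asarray(registered, dtype=float)
    t = np.asarray(target, dtype=float)
    return float(np.mean((r - t) ** 2))
```

On a phantom pair, a perfectly registered image gives a value of zero, and larger values indicate residual misalignment.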

  14. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation

    PubMed Central

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-01-01

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  15. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A.

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement, SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated the 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated, with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component, with an FRE_z of 0.76 mm and a TRE_z of 0.85 mm. A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety, and dual modality imaging for target localization.
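The FRE and TRE figures quoted above are root-mean-square distances between corresponding points after registration. A minimal sketch of that statistic (made-up point lists, not the authors' calibration data):

```python
import numpy as np

def rms_point_error(mapped_points, true_points):
    """RMS Euclidean distance between corresponding 3D points,
    the kind of statistic behind FRE/TRE values quoted in mm."""
    a = np.asarray(mapped_points, dtype=float)
    b = np.asarray(true_points, dtype=float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```

Applied to fiducial markers it yields an FRE-style number; applied to held-out targets it yields a TRE-style number.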

  16. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline.
The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  17. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at a 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum of 5,000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds, and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
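The basic idea of detecting cracks in a transverse depth profile can be illustrated with a toy threshold rule (a deliberately simplified sketch with invented names and thresholds, not the paper's distress-detection algorithms):

```python
import numpy as np

def crack_candidates(profile_mm, depth_threshold_mm=2.0):
    """Flag points of one transverse depth profile that lie deeper than the
    median surface level by more than depth_threshold_mm.
    Toy sketch only; real systems also filter noise and group points."""
    p = np.asarray(profile_mm, dtype=float)
    baseline = np.median(p)
    return np.where(baseline - p > depth_threshold_mm)[0]
```

Grouping flagged points across consecutive profiles would then distinguish transverse from longitudinal cracking.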

  18. Study on Construction of 3d Building Based on Uav Images

    NASA Astrophysics Data System (ADS)

    Xie, F.; Lin, Z.; Gui, D.; Lin, H.

    2012-07-01

    Based on the characteristics of Unmanned Aerial Vehicle (UAV) systems for low-altitude aerial photogrammetry and the needs of three-dimensional (3D) city modeling, a method for fast 3D building modeling using images from a UAV carrying a four-combined camera is studied. Firstly, by contrasting and analyzing the mosaic structures of existing four-combined cameras, a new type of four-combined camera with a special design of overlapping images is developed; it improves the self-calibration function to achieve high-precision imaging by automatically eliminating the errors of mechanical deformation and the time lag at every exposure, and further reduces the weight of the imaging system. Secondly, multi-angle images, including vertical and oblique images, acquired by the UAV system are used for detailed measurement of building surfaces and texture extraction. Finally, two tests, aerial photography with large-scale mapping at 1:1000 and 3D building construction at Shandong University of Science and Technology, and aerial photography with large-scale mapping at 1:500 and 3D building construction at Henan University of Urban Construction, validate the proposed approach to constructing 3D buildings from combined wide-angle camera images acquired by a UAV system. It is demonstrated that the UAV system for low-altitude aerial photogrammetry can be used in 3D building production, and that the technical solution in this paper offers a new and fast plan for the 3D expression of the city landscape, fine modeling and visualization.

  19. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  1. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  2. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

    Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years, photos of flattened surfaces of undisturbed soil samples impregnated with fluorescent resin, and of soil thin sections under the microscope, were the only means available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade, very good results were obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray micro-tomography has become the most widely applied technique, showing the best compromise between cost, resolution and image size. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected because of technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work, a comparison between the two methods above was carried out in order to identify their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, of an Ap horizon of an alluvial soil showing vertic characteristics was reconstructed using both a desktop X-ray micro-tomograph (Skyscan 1172) and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy) with the aim of overcoming most of the typical limitations of that technique. The best image resolution was 7.5 µm per voxel using X-ray micro-CT, while 20 µm was the best value using the serial sectioning

  3. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing, concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented, which combines both color and structural information, relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique, we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed, and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  4. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced by additional use of information on the third spatial coordinate of observed points of object surfaces. Thus, it is now capable of matching images of 3D scenes in the tasks of automatic navigation of extremely low-flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  5. Mesh generation from 3D multi-material images.

    PubMed

    Boltcheva, Dobrina; Yvinec, Mariette; Boissonnat, Jean-Daniel

    2009-01-01

    The problem of generating realistic computer models of objects represented by 3D segmented images is important in many biomedical applications. Labelled 3D images impose particular challenges for meshing algorithms because multi-material junctions form features such as surface patches, edges and corners which need to be preserved in the output mesh. In this paper, we propose a feature-preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps and overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for the purpose of realistic visualization or finite element simulations. PMID:20426123

  6. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models were created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  7. Towards magnetic 3D x-ray imaging

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve the speed, size and energy efficiency of spin-driven devices. Multidimensional visualization techniques will be crucial to achieving mesoscience goals. Magnetic tomography is of great interest for understanding, e.g., interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals and nanowires, and the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy, combining X-MCD as an element-specific magnetic contrast mechanism with the high spatial and temporal resolution provided by Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley, CA), a new rotation stage allows recording an angular series (up to 360 degrees) of high-precision 2D projection images. Applying state-of-the-art reconstruction algorithms, it is possible to retrieve the full 3D structure. We present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films, and compare them to other 3D imaging approaches, e.g., in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and the ERC under the EU FP7 program (grant agreement No. 306277).

  8. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
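The two-stage, multi-scale minima-tracking idea can be illustrated with a toy 1D version (invented function names; the actual watershed algorithm operates on 3D imprint surfaces at four scales):

```python
import numpy as np

def local_minima(profile):
    """Indices of strict local minima of a 1D profile."""
    p = np.asarray(profile, dtype=float)
    return [i for i in range(1, len(p) - 1)
            if p[i] < p[i - 1] and p[i] < p[i + 1]]

def minima_across_scales(profile, scales=(1, 2, 4, 8)):
    """Locate minima after moving-average smoothing at several scales.
    A final map could keep only minima that persist across scales."""
    out = {}
    for s in scales:
        kernel = np.ones(s) / s
        out[s] = local_minima(np.convolve(profile, kernel, mode="same"))
    return out
```

Minima that survive coarse smoothing correspond to the dominant valleys, here standing in for interstices between teeth.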

  9. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays, realizing a horizontal resolution of 480 with 12 parallaxes thanks to the adoption of a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software; fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important, so we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  10. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    The performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the satellite Earth Observing-1 (EO-1) mission using the hyperspectral imager instrument (Hyperion), which have high input SNR, are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities of coefficients being less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly. Universality of the prediction for different numbers of channels is proven.
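The hard-thresholding DCT scheme analyzed here zeroes transform coefficients whose magnitude falls below a threshold. A minimal 2D per-block sketch using SciPy follows (the paper's method is 3D and operates jointly on several channels, so this is only illustrative; the returned fraction is the kind of simple statistic used for efficiency prediction):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_hard_threshold(block, threshold):
    """Denoise one image block: forward DCT, zero small coefficients,
    inverse DCT. Also returns the fraction of sub-threshold coefficients,
    a simple statistic correlated with achievable PSNR improvement."""
    coeffs = dctn(np.asarray(block, dtype=float), norm="ortho")
    small = np.abs(coeffs) < threshold
    coeffs[small] = 0.0
    return idctn(coeffs, norm="ortho"), float(np.mean(small))
```

On a uniform block, only the DC coefficient survives, so the block is reconstructed unchanged while almost all coefficients count as sub-threshold.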

  11. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  12. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
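The claim that the similarity measure can be evaluated over all model positions with the FFT follows from the correlation theorem. A generic sketch of FFT-based matching (plain correlation rather than the paper's phase-angle measure; function name is my own):

```python
import numpy as np

def fft_correlation_surface(image, template):
    """Correlation of a template with an image at every shift in
    O(N log N) via the FFT (circular boundary conditions)."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)
    return np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))
```

The peak of the returned surface marks the best-matching position, analogous to the unambiguous peaks in the match surface described above.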

  13. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing 3D structure of scenes in the scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behaviors and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth, by detecting the reflection instant on the time profile of a surface point. However, in cases with scattering medium, the rays are both reflected and scattered during transmission, and the depth calculated from the time profile largely deviates from the true value. To handle this problem, we used the different polarization behaviors of the reflection and scattering components, and introduced active polarization to separate the reflection component to estimate the scattering robust depth. Our experiments have demonstrated that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944

  14. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well-understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  15. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated using image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. 
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  16. Calibration of an intensity ratio system for 3D imaging

    NASA Astrophysics Data System (ADS)

    Tsui, H. T.; Tang, K. C.

    1989-03-01

    An intensity ratio method for 3D imaging is proposed, with error analysis given for assessment and future improvements. The method is cheap and reasonably fast, as it requires no mechanical scanning or laborious correspondence computation. One drawback of intensity ratio methods that hampers their widespread use is the undesirable change of image intensity. This is usually caused by differences in reflection from different parts of an object surface and by the automatic iris or gain control of the camera. In our method, the gray-level patterns used include a uniform pattern, a staircase pattern, and a sawtooth pattern to make the system more robust against errors in intensity ratio. 3D information on the surface points of an object can be derived from the intensity ratios of the images by triangulation. A reference back plane is placed behind the object to monitor the change in image intensity. Errors due to camera calibration, projector calibration, variations in intensity, imperfection of the slides, etc. are analyzed. Early experiments with the system, using a newvicon CCTV camera with back-plane intensity correction, gave a mean-square range error of about 0.5 percent. Extensive analysis of the various errors is expected to yield methods for improving the accuracy.
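As a rough sketch of the back-plane correction described above (the exact correction model in the paper may differ; all names here are illustrative), dividing the object's pattern-to-uniform intensity ratio by the same ratio measured on the reference back plane cancels global intensity drift such as automatic camera gain:

```python
import numpy as np

def corrected_intensity_ratio(i_pattern, i_uniform, ref_pattern, ref_uniform):
    """Intensity ratio with back-plane correction (hypothetical sketch).

    The raw ratio i_pattern / i_uniform encodes the projector coordinate;
    dividing by the same ratio measured on the reference back plane
    cancels global intensity drift common to both measurements.
    """
    eps = 1e-9  # guard against division by zero in dark pixels
    raw = np.asarray(i_pattern, float) / (np.asarray(i_uniform, float) + eps)
    ref = np.asarray(ref_pattern, float) / (np.asarray(ref_uniform, float) + eps)
    return raw / (ref + eps)
```

The corrected ratio would then be mapped to a projector-plane coordinate and triangulated against the camera ray to recover depth.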

  17. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D finite-difference prestack depth migration, remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D prestack depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable seismic imaging code.

  18. Depth-controlled 3D TV image coding

    NASA Astrophysics Data System (ADS)

    Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo

    1998-04-01

    Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels according to their visual sensitivity. Through an a-priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the reproduced volume space in order to code them selectively based on their visual relevance. This region of interest is identified as the one focused by the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for image blocks lying on less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details in the more relevant image portions. From an implementation point of view, it is worth noticing that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show such improvements as better image quality for a given transmission bit rate, or a graceful quality degradation of the reconstructed images with decreasing data rates.

  19. Ice shelf melt rates and 3D imaging

    NASA Astrophysics Data System (ADS)

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne- and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate-bandwidth, VHF, ice-penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra-wide-bandwidth (UWB), UHF, ice-penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depths of bottom crevasses and deviations from hydrostatic equilibrium. While our single-channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground-based multichannel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider-swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  20. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
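The mean-subtraction method lends itself to a compact sketch. This is an illustrative reading of the step described above, not the NASA implementation; the (band, y, x) array layout is an assumption:

```python
import numpy as np

def mean_subtract_lowpass(subband):
    """Subtract the mean of each spatial plane of a spatially-low-pass
    subband, as in the 'mean subtraction' method.

    `subband` is a 3-D array indexed (band, y, x). The returned means
    must be transmitted as side information so the decoder can restore
    the planes after decompression.
    """
    subband = np.asarray(subband, dtype=float)
    means = subband.mean(axis=(1, 2), keepdims=True)   # one mean per spatial plane
    return subband - means, means.squeeze((1, 2))
```

The zero-mean planes are then better suited to the 2D subband coders the abstract refers to.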

  1. 3D imaging with a linear light source

    NASA Astrophysics Data System (ADS)

    Lunazzi, José J.; Rivera, Noemí I. R.

    2008-04-01

    In a previous system we showed how the three-dimensionality of an object can be projected and preserved on a diffractive screen, which is just a simple diffractive holographic lens. A transmission object is illuminated with an extended filament of a white light lamp and no additional element is necessary. The system forms three-dimensional (3D) images with normal depth (orthoscopic) of the shadow type. The continuous parallax, perfect sharpness and additional characteristics of the image depend on the width and extension of the luminous filament and the properties of the diffractive lens. This new imaging system is shown to inspire an interesting extension to non-perfect reflective or refractive imaging elements because the sharpness of the image depends only on the width of the source. As new light sources are being developed that may result in very thin linear white light sources, for example, light emitting diodes, it may be useful to further develop this technique. We describe an imaging process in which a rough Fresnel metallic mirror can give a sharp image of an object due to the reduced width of a long filament lamp. We will discuss how the process could be extended to Fresnel lenses or to any aberrating imaging element.

  2. Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

    We present dynamic visual image modeling for 3D synthetic scenes using dynamic multichannel binocular visual images based on a mobile self-organizing network. Technologies for 3D modeling of synthetic scenes have been widely used across many industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, and to exploit the advantages of mobile networks in rural areas to improve existing mobile information services and provide personalized information services. The goal of the display is a faithful representation of the synthetic scene. Using low-power dynamic visual monitors and temperature/humidity or GPS sensors installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, a 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D imaging based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can collect large amounts of video from places that are unsuitable for human observation or unreachable, and accurately synthesize the 3D scene. This application will play a great role in promoting 3D modeling in agriculture.

  3. Density-tapered spiral arrays for ultrasound 3-D imaging.

    PubMed

    Ramalli, Alessandro; Boni, Enrico; Savoia, Alessandro Stuart; Tortoli, Piero

    2015-08-01

    The current high interest in 3-D ultrasound imaging is pushing the development of 2-D probes with a challenging number of active elements. The most popular approach to limit this number is the sparse array technique, which designs the array layout by means of complex optimization algorithms. These algorithms are typically constrained by a few steering conditions and, as such, cannot guarantee uniform side-lobe performance at all angles. The performance may be improved by the ungridded extensions of the sparse array technique, but this result is achieved at the expense of a further complication of the optimization process. In this paper, a method to design the layout of large circular arrays with a limited number of elements according to Fermat's spiral seeds and spatial density modulation is proposed and shown to be suitable for application to 3-D ultrasound imaging. This deterministic, aperiodic, and balanced positioning procedure attempts to guarantee uniform performance over a wide range of steering angles. The capabilities of the method are demonstrated by simulating and comparing the performance of spiral and dense arrays. A good trade-off for small vessel imaging is found, e.g., in the 60λ spiral array with 1.0λ elements and a Blackman density tapering window. Here, the grating lobe level is -16 dB, the lateral resolution is lower than 6λ, the depth of field is 120λ, and the average contrast is 10.3 dB, while the sensitivity remains within a 5 dB range for a wide selection of steering angles. The simulation results may represent a reference guide for the design of spiral sparse array probes in different application fields. PMID:26285181
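One plausible reading of the construction (golden-angle Fermat spiral with radii drawn from the inverse CDF of a Blackman-tapered radial density) can be sketched as follows; this is an assumption about the design procedure, not the authors' exact algorithm:

```python
import numpy as np

def spiral_array(n_elem, radius):
    """Element positions on a density-tapered spiral (illustrative sketch).

    Angles follow the golden-angle (Fermat spiral) sequence; radii are
    drawn from the inverse CDF of a radial density proportional to
    blackman(r) * r, concentrating elements toward the array centre.
    """
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    n = np.arange(n_elem)
    # Radial density ~ window(r) * r, sampled on a fine grid
    r_grid = np.linspace(0.0, radius, 2048)
    win = np.blackman(2 * len(r_grid))[len(r_grid):]   # decaying half-window
    cdf = np.cumsum(win * r_grid)
    cdf /= cdf[-1]
    r = np.interp((n + 0.5) / n_elem, cdf, r_grid)     # inverse-CDF sampling
    theta = n * golden_angle
    return r * np.cos(theta), r * np.sin(theta)
```

With a flat (untapered) density the mean element radius of a disc array is 2/3 of the aperture radius; the tapered layout pulls it noticeably inward, which is what apodizes the effective aperture.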

  4. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials, the panchromatic photopolymer materials such as the DuPont and Bayer photopolymers, is also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depends highly on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  5. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury of the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between third molars and the mandibular canal using Dental CT scans, DICOM image acquisition, and 3D reconstruction with dedicated software. From our study we deduce that 3D images are not indispensable, but they can provide very valuable assistance in the most complicated cases. PMID:23386934

  6. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post-etch, the plastics were treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. 
The range of track diameter observed was between 4

  7. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
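A heavily simplified, one-dimensional toy version of Monte Carlo photon transport conveys the flavor of such models. Real MCML handles layered tissue, anisotropic Henyey-Greenstein scattering, and refractive boundaries, none of which are modeled in this sketch:

```python
import numpy as np

def photon_depths(n_photons, mu_a, mu_s, max_steps=1000, rng=None):
    """Toy 1-D Monte Carlo photon transport in a homogeneous medium.

    mu_a and mu_s are absorption and scattering coefficients. Each photon
    takes exponentially distributed free paths, loses weight by the
    single-scattering albedo, and scatters isotropically (up/down only).
    Returns the depth at which each photon's weight is extinguished.
    """
    rng = np.random.default_rng(rng)
    mu_t = mu_a + mu_s                               # total attenuation
    depths = np.empty(n_photons)
    for i in range(n_photons):
        z, direction, weight = 0.0, 1.0, 1.0
        for _ in range(max_steps):
            step = -np.log(rng.random()) / mu_t      # free path length
            z += direction * step
            weight *= mu_s / mu_t                    # survival after one event
            direction = rng.choice([-1.0, 1.0])      # isotropic in 1-D
            if weight < 1e-3 or z < 0.0:             # absorbed or exited
                break
        depths[i] = max(z, 0.0)
    return depths
```

A realistic vein-imaging model would additionally track lateral position, melanin and fat layers, and the vessel wall interface, as the abstract describes.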

  8. A new gold-standard dataset for 2D/3D image registration evaluation

    NASA Astrophysics Data System (ADS)

    Pawiro, Supriyanto; Markelj, Primoz; Gendrin, Christelle; Figl, Michael; Stock, Markus; Bloch, Christoph; Weber, Christoph; Unger, Ewald; Nöbauer, Iris; Kainberger, Franz; Bergmeister, Helga; Georg, Dietmar; Bergmann, Helmar; Birkfellner, Wolfgang

    2010-02-01

    In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone beam CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over existing data sets in the level of anatomical detail and image data quality. The markers of three-dimensional (3D) and two-dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc) and in-house software. The projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.
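The target registration error reported above is, in essence, the mean distance between fiducials mapped by an estimated rigid transform and their gold-standard positions. A generic sketch of the metric (not the authors' code; names are illustrative):

```python
import numpy as np

def target_registration_error(markers, markers_gold, transform):
    """Mean target registration error (TRE).

    `transform` is a 4x4 homogeneous matrix mapping estimated marker
    positions into the gold-standard frame; markers are (N, 3) arrays.
    """
    pts = np.c_[markers, np.ones(len(markers))]        # homogeneous coords
    mapped = (transform @ pts.T).T[:, :3]
    return np.linalg.norm(mapped - markers_gold, axis=1).mean()
```

The projection distance error is analogous but measured in the 2D radiograph plane after perspective projection.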

  9. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  10. 3D photoacoustic imaging of a moving target

    NASA Astrophysics Data System (ADS)

    Ephrat, Pinhas; Roumeliotis, Michael; Prato, Frank S.; Carson, Jeffrey J. L.

    2009-02-01

    We have developed a fast 3D photoacoustic imaging system based on a sparse array of ultrasound detectors and iterative image reconstruction. To investigate the high frame rate capabilities of our system in the context of rotational motion, flow, and spectroscopy, we performed high frame-rate imaging on a series of targets, including a rotating graphite rod, a bolus of methylene blue flowing through a tube, and hyper-spectral imaging of a tube filled with methylene blue under a no flow condition. Our frame-rate for image acquisition was 10 Hz, which was limited by the laser repetition rate. We were able to track the rotation of the rod and accurately estimate its rotational velocity, at a rate of 0.33 rotations-per-second. The flow of contrast in the tube, at a flow rate of 180 μL/min, was also well depicted, and quantitative analysis suggested a potential method for estimating flow velocity from such measurements. The spectrum obtained did not provide accurate results, but depicted the spectral absorption signature of methylene blue, which may be sufficient for identification purposes. These preliminary results suggest that our high frame-rate photoacoustic imaging system could be used for identifying contrast agents and monitoring kinetics as an agent propagates through specific, simple structures such as blood vessels.

  11. 3D imaging of the early embryonic chicken heart with focused ion beam scanning electron microscopy

    PubMed Central

    Rennie, Monique Y.; Gahan, Curran G.; López, Claudia S.; Thornburg, Kent L.; Rugonyi, Sandra

    2015-01-01

    Early embryonic heart development is a period of dynamic growth and remodeling, with rapid changes occurring at the tissue, cell, and subcellular levels. A detailed understanding of the events that establish the components of the heart wall has been hampered by a lack of methodologies for three dimensional (3D), high-resolution imaging. Focused ion beam-scanning electron microscopy (FIB-SEM) is a novel technology for imaging 3D tissue volumes at the subcellular level. FIB-SEM alternates between imaging the block face with a scanning electron beam and milling away thin sections of tissue with a focused ion beam, allowing for collection and analysis of 3D data. FIB-SEM was used to image the three layers of the day 4 chicken embryo heart: myocardium, cardiac jelly, and endocardium. Individual images obtained with FIB-SEM were comparable in quality and resolution to those obtained with transmission electron microscopy (TEM). Up to 1100 serial images were obtained in 4 nm increments at 4.88 nm resolution, and image stacks were aligned to create volumes 800–1500 μm3 in size. Segmentation of organelles revealed their organization and distinct volume fractions between cardiac wall layers. We conclude that FIB-SEM is a powerful modality for 3D subcellular imaging of the embryonic heart wall. PMID:24742339

  12. 3-D Imaging and Simulation for Nephron Sparing Surgical Training.

    PubMed

    Ahmadi, Hamed; Liu, Jen-Jane

    2016-08-01

    Minimally invasive partial nephrectomy (MIPN) is now considered the procedure of choice for small renal masses, largely based on functional advantages over traditional open surgery. Lack of haptic feedback, the need for spatial understanding of tumor borders, and advanced operative techniques to minimize ischemia time or achieve zero-ischemia PN are among the factors that make MIPN a technically demanding operation with a steep learning curve for inexperienced surgeons. Surgical simulation has emerged as a useful training adjunct in residency programs to facilitate the acquisition of these complex operative skills in the setting of restricted work hours and limited operating room time and autonomy. However, the majority of available surgical simulators focus on basic surgical skills, and procedure-specific simulation is needed for optimal surgical training. Advances in 3-dimensional (3-D) imaging have also enhanced the surgeon's ability to localize tumors intraoperatively. This article focuses on recent procedure-specific simulation models for laparoscopic and robotic-assisted PN and on advanced 3-D imaging techniques as part of pre- and, in some cases, intraoperative surgical planning. PMID:27314271

  13. 3D Reconstruction of virtual colon structures from colonoscopy images.

    PubMed

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  14. Computing 3D head orientation from a monocular image sequence

    NASA Astrophysics Data System (ADS)

    Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

    1997-02-01

    An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and the fifth at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll, and pitch. Analytical and experimental results are reported.
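The projective invariance exploited here is that of the cross-ratio of four collinear points: it is unchanged by any projective transformation of the line. A minimal illustration (the paper's actual point configuration is not reproduced):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates
    along the line. Invariant under projective maps x -> (px+q)/(rx+s),
    which is what lets eye-corner cross-ratios constrain head pose.
    """
    return ((a - c) * (b - d)) / ((a - d) * (b - c))
```

For example, the points 0, 1, 2, 4 have cross-ratio 1.5, and applying any projective map such as x -> (2x+1)/(x+3) to all four points leaves that value unchanged.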

  15. 3D electrical tomographic imaging using vertical arrays of electrodes

    NASA Astrophysics Data System (ADS)

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  16. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    NASA Astrophysics Data System (ADS)

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can considerably reduce the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multistatic geometry, consisting of six vertical real-aperture receive apertures combined with a single circular transmit aperture, provides effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.
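    The two resolution limits quoted above (c/2B versus λ/4) differ by nearly an order of magnitude at L-band; a back-of-envelope comparison, with assumed bandwidth and carrier values rather than the paper's:

```python
# Bandwidth-limited vs. wavelength-limited resolution from the abstract:
# c/(2B) vs. lambda/4. The L-band numbers below are illustrative
# assumptions, not values from the paper.
C = 299_792_458.0  # speed of light, m/s

def bandwidth_resolution(B):
    """Conventional range resolution, c / (2B)."""
    return C / (2.0 * B)

def wavelength_resolution(fc):
    """Wide-angle limit quoted in the abstract, lambda / 4."""
    return (C / fc) / 4.0

B = 300e6    # assumed 300 MHz bandwidth
fc = 1.3e9   # assumed L-band carrier
print(bandwidth_resolution(B))    # ~0.5 m
print(wavelength_resolution(fc))  # ~0.058 m
```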

  17. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  18. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the right images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiviews for motion parallax under eye tracking. More importantly, we demonstrate substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace an eyewear-assisted counterpart. PMID:26074575
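    The barrier geometry being tuned can be sketched with first-order thin-barrier formulas (generic textbook relations, not the paper's engineered design; all numbers are assumptions):

```python
# First-order parallax-barrier geometry: the barrier gap g and slit pitch b
# that steer adjacent pixel columns to the two eyes. Textbook thin-barrier
# model; every number below is an assumption for illustration.

def barrier_gap(p, D, e):
    """Gap between panel and barrier: g = p*D / e (similar triangles)."""
    return p * D / e

def barrier_pitch(p, D, g, n_views=2):
    """Slit pitch slightly below n*p so views converge: b = n*p*D / (D+g)."""
    return n_views * p * D / (D + g)

p = 0.1e-3   # 0.1 mm subpixel pitch (assumed)
D = 0.6      # 600 mm viewing distance (assumed)
e = 65e-3    # 65 mm interocular distance
g = barrier_gap(p, D, e)
b = barrier_pitch(p, D, g)
print(g, b)  # gap just under 1 mm, pitch just under 2*p
```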

  19. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D
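    The quoted 1 and 10 MHz bandwidths set the achievable range resolution inside the nucleus via c/(2B√ε); a quick sketch, where the ice permittivity is an assumed value rather than one from the abstract:

```python
import math

# Range resolution of the quoted sounder bandwidths, c / (2 * B * sqrt(eps)),
# inside cometary ice. The permittivity (~3.1, typical of solid water ice)
# is an assumption, not a value from the abstract.
C = 299_792_458.0

def range_resolution(B, eps=1.0):
    """Range resolution for bandwidth B in a medium of permittivity eps."""
    return C / (2.0 * B * math.sqrt(eps))

for B in (1e6, 10e6):  # the 1 MHz and 10 MHz bandwidths in the abstract
    print(B, range_resolution(B), range_resolution(B, eps=3.1))
```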

  20. Distributed Network, Wireless and Cloud Computing Enabled 3-D Ultrasound; a New Medical Technology Paradigm

    PubMed Central

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example: the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high-end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low-end ultrasound transducer designed for 2-D, through a mobile device and a wireless connection link between them. Producing high-end 3-D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high-quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and provide more accurate diagnoses, effectively saving lives. PMID:19936236

  1. Distributed network, wireless and cloud computing enabled 3-D ultrasound; a new medical technology paradigm.

    PubMed

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example: the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high-end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low-end ultrasound transducer designed for 2-D, through a mobile device and a wireless connection link between them. Producing high-end 3-D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high-quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and provide more accurate diagnoses, effectively saving lives. PMID:19936236

  2. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860
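    The idea of segmenting from image derivatives (step one) can be illustrated with a toy detector; the paper's actual algorithm and its selective post-processing are considerably more elaborate:

```python
# Toy derivative-based boundary detector: a bright square "nucleus" on a
# dark background, with central-difference gradients flagging its boundary
# pixels. Illustration of the idea only, not the paper's algorithm.

def gradient_magnitude(img):
    """Central-difference gradient magnitude on the image interior."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# 8x8 synthetic image with a 4x4 bright object
img = [[1.0 if 2 <= y <= 5 and 2 <= x <= 5 else 0.0 for x in range(8)]
       for y in range(8)]
grad = gradient_magnitude(img)
edge = {(y, x) for y in range(8) for x in range(8) if grad[y][x] > 0.4}
print(len(edge))  # pixels flagged around the object's boundary
```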

  3. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from Web3D technologies to create courses with interactive 3D materials. There are many open source and commercial products offering 3D technologies over the web…

  4. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that with the continuous development of this powerful analytical tool, ever-growing applications in the biomedical field are promised. PMID:21320052

  5. 3D scanning and 3D printing as innovative technologies for fabricating personalized topical drug delivery systems.

    PubMed

    Goyanes, Alvaro; Det-Amornrat, Usanee; Wang, Jie; Basit, Abdul W; Gaisford, Simon

    2016-07-28

    Acne is a multifactorial inflammatory skin disease with high prevalence. In this work, the potential of 3D printing to produce flexible personalised-shape anti-acne drug (salicylic acid) loaded devices was demonstrated by two different 3D printing (3DP) technologies: Fused Deposition Modelling (FDM) and stereolithography (SLA). 3D scanning technology was used to obtain a 3D model of a nose adapted to the morphology of an individual. In FDM 3DP, commercially produced Flex EcoPLA™ (FPLA) and polycaprolactone (PCL) filaments were loaded with salicylic acid by hot melt extrusion (HME) (theoretical drug loading: 2% w/w) and used as feedstock material for 3D printing. Drug loading in the FPLA-salicylic acid and PCL-salicylic acid 3D printed patches was 0.4% w/w and 1.2% w/w respectively, indicating significant thermal degradation of the drug during HME and 3D printing. Diffusion testing in Franz cells using a synthetic membrane revealed that the drug-loaded printed samples released <187 μg/cm² within 3 h. FPLA-salicylic acid filament was successfully printed as a nose-shape mask by FDM 3DP, but the PCL-salicylic acid filament was not. In the SLA printing process, the drug was dissolved in different mixtures of poly(ethylene glycol) diacrylate (PEGDA) and poly(ethylene glycol) (PEG) that were solidified by the action of a laser beam. SLA printing led to 3D printed devices (nose-shape) with higher resolution and higher drug loading (1.9% w/w) than FDM, with no drug degradation. The results of drug diffusion tests revealed that drug diffusion was faster than with the FDM devices, 229 and 291 μg/cm² within 3 h for the two formulations evaluated. In this study, SLA printing was the more appropriate 3D printing technology to manufacture anti-acne devices with salicylic acid. The combination of 3D scanning and 3D printing has the potential to offer solutions to produce personalised drug loaded devices, adapted in shape and size to individual patients. PMID:27189134

  6. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt the stereoscopic approach for flat areas of the image. Since focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels that constitute noticeable edges. Also, occlusion and gloss of the objects can be roughly expressed with the proposed method since we use the stereoscopic approach for the flat areas. With this system, many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3D images without contradiction between binocular convergence and focal accommodation.

  7. High-resolution 3D imaging laser radar flight test experiments

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Davis, W. R.; Rich, G. C.; McLaughlin, J. L.; Lee, E. I.; Stanley, B. M.; Burnside, J. W.; Rowe, G. S.; Hatch, R. E.; Square, T. E.; Skelly, L. J.; O'Brien, M.; Vasile, A.; Heinrichs, R. M.

    2005-05-01

    Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photo-diode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched solid-state frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kilohertz pulse rate and at 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode
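    The 500 MHz per-pixel counter mentioned above implies a fixed range quantization, which follows directly from the time-of-flight relation:

```python
# Range quantization of a 500 MHz time-of-flight counter: each clock tick
# is 2 ns of round-trip time, i.e. c*dt/2 ~ 0.3 m of one-way range per
# count. The counter rate is from the abstract; the example count is not.
C = 299_792_458.0
F_CLK = 500e6  # per-pixel digital clock rate quoted in the abstract

def range_from_counts(n):
    """One-way range for a round-trip time of n clock ticks."""
    return C * (n / F_CLK) / 2.0

bin_size = range_from_counts(1)
print(bin_size)                 # ~0.2998 m per count
print(range_from_counts(1000))  # a return from ~300 m away
```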

  8. Vector Acoustics, Vector Sensors, and 3D Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Lindwall, D.

    2007-12-01

    Vector acoustic data have two more dimensions of information than pressure data and may allow 3D underwater imaging with much less data than hydrophone arrays require. A vector acoustic sensor measures the particle motion due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom and sides. Without resorting to the usual methods of seismic imaging, which in this case are only two-dimensional and relied entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
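    The "simple trigonometric calculation" for locating a reflection point admits a closed form; the sketch below is a generic bistatic derivation under my own conventions, not taken from the paper:

```python
# With known source S, sensor R, total path length L = c*t, and the arrival
# direction d (unit vector) measured by the vector sensor, the scatterer
# sits at P = R + r*d where |P - S| + r = L. Squaring gives the closed form
#   r = (L^2 - |R - S|^2) / (2 * (L + (R - S) . d)).
# Geometry and numbers are illustrative, not from the experiment.

def reflection_point(S, R, d, L):
    """Scatterer position from bistatic travel path and arrival direction."""
    v = [R[i] - S[i] for i in range(3)]
    vv = sum(x * x for x in v)
    vd = sum(v[i] * d[i] for i in range(3))
    r = (L * L - vv) / (2.0 * (L + vd))
    return [R[i] + r * d[i] for i in range(3)]

# Self-check: build d and L from a known scatterer P0, then recover it.
S, R, P0 = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.7, 0.6, 0.2]
r0 = sum((P0[i] - R[i]) ** 2 for i in range(3)) ** 0.5
d = [(P0[i] - R[i]) / r0 for i in range(3)]
L = sum(x * x for x in P0) ** 0.5 + r0  # |P0 - S| + |P0 - R|
P = reflection_point(S, R, d, L)
print(P)  # recovers P0
```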

  9. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  10. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Many efforts have been put into three-dimensional imaging methods and systems in order to meet fast and highly accurate requirements. In this article, we realize a fast and high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras share the same spatial resolution, letting us use the depth maps taken by the TOF camera to figure an initial disparity. With the depth map constraining the stereo matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and perform stereo matching on an embedded system. The simulation results demonstrate that the approach can speed up the process of stereo matching, increase matching reliability and stability, realize embedded calculation, and expand the application range.
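    How a TOF depth prior narrows the disparity search can be sketched with the standard stereo relation d = f·B/Z; the camera parameters below are assumptions, not the paper's hardware:

```python
# A TOF depth estimate Z +/- Z_err bounds the stereo disparity via the
# standard relation d = f * B / Z, shrinking the per-pixel search window.
# Focal length and baseline are invented for illustration.

def disparity(Z, f_px=800.0, baseline=0.1):
    """Disparity in pixels for depth Z (m), focal length f_px (px)."""
    return f_px * baseline / Z

def search_window(Z_tof, Z_err):
    """Disparity range consistent with a TOF depth Z_tof +/- Z_err."""
    return disparity(Z_tof + Z_err), disparity(Z_tof - Z_err)

lo, hi = search_window(2.0, 0.1)  # TOF says ~2 m, +/- 10 cm uncertainty
print(lo, hi)  # a few-pixel window instead of a full-range scan
```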

  11. Brain surface maps from 3-D medical images

    NASA Astrophysics Data System (ADS)

    Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

    1991-06-01

    The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebral-spinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results by flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
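    The radial projection onto the semi-cylinder and its flattening can be sketched as follows (coordinate conventions and the cylinder radius are illustrative assumptions, not the paper's parameters):

```python
import math

# Sketch of the projection step: an inner-contour point (x, y) in a slice at
# height z is projected radially onto a cylinder whose axis is normal to
# the slices, then the cylinder is unrolled to a plane. The axis is assumed
# to pass through the origin; the radius is an invented value.
R_CYL = 0.09  # cylinder radius (m), large enough to enclose the brain

def to_flat_map(x, y, z):
    """Map a contour point to unrolled-cylinder (arc length, height)."""
    theta = math.atan2(y, x)   # azimuth of the contour point
    return (R_CYL * theta, z)  # flattened coordinates

u, v = to_flat_map(0.05, 0.05, 0.12)
print(u, v)  # arc-length coordinate and slice height
```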

  12. Image appraisal for 2D and 3D electromagnetic inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1998-04-01

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and model covariance matrices can be directly calculated. The columns of the model resolution matrix are shown to yield empirical estimates of the horizontal and vertical resolution throughout the imaging region. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how the estimated data noise maps into parameter error. When the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion), an iterative method can be applied to statistically estimate the model covariance matrix, as well as a regularization covariance matrix. The latter estimates the error in the inverted results caused by small variations in the regularization parameter. A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on a synthetic cross-well EM data set.
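    A miniature version of the direct-inversion appraisal: for a damped least-squares scheme with Jacobian J and damping lam, the model resolution matrix R = (JᵀJ + lam·I)⁻¹ JᵀJ approaches the identity as the damping vanishes. Matrix sizes and values below are illustrative, not from the paper:

```python
# Tiny model-resolution-matrix example for a 3-data, 2-parameter problem.
# A resolution matrix near the identity means well-resolved parameters;
# heavy damping smears it. Values are invented for illustration.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def resolution_matrix(J, lam):
    """R = (J^T J + lam*I)^(-1) J^T J for a 2-parameter model."""
    JtJ = matmul(transpose(J), J)
    damped = [[JtJ[i][j] + (lam if i == j else 0.0) for j in range(2)]
              for i in range(2)]
    return matmul(inv2(damped), JtJ)

J = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.4]]  # 3 data, 2 model parameters
for lam in (1.0, 1e-6):
    R = resolution_matrix(J, lam)
    print([[round(x, 3) for x in row] for row in R])  # -> identity as lam -> 0
```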

  13. 3D geometric analysis of the aorta in 3D MRA follow-up pediatric image data

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Alrajab, Abdulsattar; Arnold, Raoul; Eichhorn, Joachim; von Tengg-Kobligk, Hendrik; Schenk, Jens-Peter; Rohr, Karl

    2014-03-01

    We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model which requires only relatively few model parameters. The new model is used in conjunction with a two-step fitting scheme for refining the segmentation result yielding an accurate segmentation of the vascular shape. Moreover, we include a novel adaptive background masking scheme and we describe a spatial normalization scheme to align the segmentation results from follow-up examinations. We have evaluated our proposed approach using different 3D synthetic images and we have successfully applied the approach to follow-up pediatric 3D MRA image data.

  14. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  15. 3D surface scan of biological samples with a Push-broom Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Kincaid, Russell; Hruska, Zuzana; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2013-08-01

    The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application to food safety and quality inspection has made significant progress in recent years. Specifically, hyperspectral imaging has shown its potential for surface contamination detection in many food-related applications. Most existing hyperspectral imaging systems use pushbroom scanning, which is generally suited to flat-surface inspection. In some applications it is desirable to be able to acquire hyperspectral images of circular objects such as corn ears, apples, and cucumbers. Past research describes inspection systems that examine all surfaces of individual objects, but most of these systems did not employ hyperspectral imaging. These systems typically utilized a roller to rotate an object, such as an apple. During apple rotation, the camera took multiple images in order to cover the complete surface of the apple. The acquired image data lacked the spectral component present in a hyperspectral image. This paper discusses the development of a hyperspectral imaging system for a 3-D surface scan of biological samples. The new instrument is based on a pushbroom hyperspectral line scanner with a rotational stage to turn the sample. The system is suitable for whole-surface hyperspectral imaging of circular objects. In addition to its value to the food industry, the system could be useful for other applications involving 3-D surface inspection.
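    For a rotational-stage pushbroom scan like the one described, the angular step that keeps the surface sampling comparable to the across-track pixel size follows from a small-angle argument (the numbers below are assumptions, not the instrument's specifications):

```python
import math

# Angular step for a rotational pushbroom scan: to keep the surface arc per
# step no larger than the pixel footprint, d_theta ~ pixel_size / radius
# (small-angle approximation). Radius and pixel size are invented values.

def steps_per_revolution(radius, pixel_size):
    """Line scans needed for one full revolution at matched sampling."""
    d_theta = pixel_size / radius
    return math.ceil(2.0 * math.pi / d_theta)

print(steps_per_revolution(radius=0.025, pixel_size=0.1e-3))  # ~1571 lines
```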

  16. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-01

    The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. We present experiments to determine the near-infrared (NIR) wavelengths that best optimize vein contrast across physiological differences such as skin tone and/or the presence of hair on the arm or wrist surface. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to the multispectral system to provide 3D information about the orientation of the patient's arm. Images of each patient's arm are captured under every possible combination of illuminants, and the optimal combination of wavelengths that maximizes vein contrast for a given subject is determined using linear discriminant analysis. PMID:19582050
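For two classes and a single measurement, linear discriminant analysis reduces to ranking by the Fisher separability criterion. The sketch below scores individual wavelength bands by vein/background contrast; it is a simplified stand-in for the paper's search over wavelength combinations, and all names and sample values are hypothetical:

```python
def fisher_contrast(vein, background):
    """Fisher-style separability between two intensity samples:
    (difference of means)^2 / (sum of variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    num = (mean(vein) - mean(background)) ** 2
    den = var(vein) + var(background)
    if den == 0.0:
        return 0.0 if num == 0.0 else float("inf")
    return num / den

def best_band(samples):
    """samples: {wavelength_nm: (vein_pixels, background_pixels)}.
    Returns the wavelength with the highest Fisher contrast."""
    return max(samples, key=lambda w: fisher_contrast(*samples[w]))
```

Extending this to pairs or triples of illuminants means scoring a projected combination of bands rather than each band alone, which is where full LDA comes in.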

  17. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition, and related tasks. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition, and a sampling criterion. To achieve an efficient reconstruction system, we address the problem along several dimensions. First, we extract the geometric 3D coordinates of an object that is illuminated by a set of concentric circular patterns and reflected onto a 2D image plane. The relationship between the original and the deformed shape of the light patterns, induced by the surface shape, provides sufficient 3D coordinate information. Second, we consider system efficiency. Efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns projected onto the object of interest. Akin to the Shannon-Nyquist sampling theorem, we derive the minimum number of circular patterns that represents the target object without considerable information loss. Specific geometric information (e.g., the highest curvature) of the object is key to deriving this minimum sampling density. Third, the object, represented using the minimum number of patterns, has incomplete color information (i.e., color information is given a priori only along the curves), so an interpolation is carried out to complete the photometric reconstruction. The reconstruction is only approximate, because the minimum number of patterns may not reproduce the original object exactly; the result shows no considerable information loss, however, and the quality of the approximate reconstruction is evaluated by performing recognition or classification. In object recognition, we use facial curves, which are deformed circular curves (patterns) on a target object. We simply carry out comparison between the

  18. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  19. 3D imaging of semiconductor components by discrete laminography

    NASA Astrophysics Data System (ADS)

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-01

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
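The idea of exploiting prior knowledge about the materials in the object can be sketched, in much-simplified form, as algebraic reconstruction (Kaczmarz/ART sweeps) followed by projection of each pixel onto the known gray levels. This is only a toy illustration of the discrete-tomography principle on a 2x2 image, not the authors' algorithm:

```python
def art_step(x, rows, b, relax=1.0):
    """One sweep of the algebraic reconstruction technique (Kaczmarz)."""
    for a, bi in zip(rows, b):
        norm = sum(v * v for v in a)
        if norm == 0:
            continue
        r = (bi - sum(ai * xi for ai, xi in zip(a, x))) / norm
        x = [xi + relax * r * ai for xi, ai in zip(x, a)]
    return x

def snap_to_levels(x, levels):
    """Discrete-tomography prior: each pixel must take one of the known
    gray levels, so project onto the nearest one."""
    return [min(levels, key=lambda g: abs(g - xi)) for xi in x]

def discrete_reconstruct(rows, b, n, levels, sweeps=50):
    """Continuous ART sweeps, then enforce the discrete gray-level prior."""
    x = [0.0] * n
    for _ in range(sweeps):
        x = art_step(x, rows, b)
    return snap_to_levels(x, levels)
```

With row sums, column sums, and one diagonal ray, the binary 2x2 image below is uniquely determined, and the gray-level snap removes the residual continuous error.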

  20. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is implicated in an estimated 6% to 8% of patients referred for the treatment of back and leg pain. Treating piriformis syndrome under image guidance with fluoroscopy, computed tomography (CT), electromyography (EMG), or ultrasound (US) has become standard practice, and the approach has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image to the CT or magnetic resonance image scan. After registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections achieved only 30% accuracy, roughly one third that of ultrasound-guided injections. The technique presented here exhibited a needle guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429

  1. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
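A minimal illustration of the quantities involved: the MSE quality metric and a (1D, CPU-only) bilateral filter, whose spatial and range scaling parameters are exactly the kind the study found must be chosen carefully. Parameter names are illustrative; the study's GPU kernels are 3D and far more elaborate:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def bilateral_1d(signal, radius, sigma_s, sigma_r):
    """1D bilateral filter: each weight combines spatial closeness
    (sigma_s) and intensity closeness (sigma_r), so edges are preserved."""
    out = []
    for i, v in enumerate(signal):
        wsum = total = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            total += w * signal[j]
        out.append(total / wsum)
    return out
```

With a small range sigma the filter smooths within flat regions while leaving a sharp 0-to-10 step intact, which is the edge-preserving behavior that makes the filter attractive for MR denoising.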

  2. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
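Multiple-return pulse recognition within a single range-resolved band can be sketched as threshold-crossing detection on the digitized waveform, with each detected pulse converted to range via r = c·t/2. The bin width and threshold below are illustrative, not the demonstrator's actual parameters:

```python
C = 3.0e8  # speed of light, m/s

def detect_returns(waveform, dt, threshold):
    """Find rising-edge threshold crossings and convert bin times to ranges.

    waveform: intensity per time bin; dt: bin width in seconds.
    Returns a list of (range_m, peak_intensity), one per detected pulse,
    so partially obscuring and obscured surfaces each yield a return.
    """
    returns = []
    i, n = 0, len(waveform)
    while i < n:
        if waveform[i] >= threshold and (i == 0 or waveform[i - 1] < threshold):
            j = i
            while j < n and waveform[j] >= threshold:
                j += 1
            peak = max(range(i, j), key=lambda k: waveform[k])
            returns.append((C * (peak * dt) / 2.0, waveform[peak]))
            i = j
        else:
            i += 1
    return returns
```

Running this per spectral band is what allows the spectrum of a foreground obscurant and a background target to be separated by range rather than mixed in one pixel.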

  3. 3D segmentation of prostate ultrasound images using wavelet transform

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Yang, Xiaofeng; Halig, Luma V.; Fei, Baowei

    2011-03-01

    The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the procedure is limited by the use of 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images that extracts texture features and statistically matches the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) are located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images to classify prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, the W-SVMs are trained in the sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as prostate or non-prostate by texture matching. After post-processing, the labeled voxels from the three planes are overlaid on a prostate probability model, created from 10 segmented prostate datasets. Consequently, each voxel carries four labels: one from each of the sagittal, coronal, and transverse planes, and one probability label. By defining a weight function for the labelings in each region, each voxel is finally classified as prostate or non-prostate. Experimental results on real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
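The final per-voxel decision combines three plane-wise texture labels with the probability model. The paper's weight function is region-dependent; the sketch below uses a single global weighted vote purely to illustrate the label-fusion step, and all names and weights are assumptions:

```python
def fuse_labels(sag, cor, tra, prob, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted vote over three plane-wise labels (0 or 1) and a
    probability-model value in [0, 1] for one voxel.

    Returns 1 (prostate) when the weighted score reaches half the
    total weight, else 0 (non-prostate).
    """
    w_s, w_c, w_t, w_p = weights
    score = w_s * sag + w_c * cor + w_t * tra + w_p * prob
    return 1 if score >= (w_s + w_c + w_t + w_p) / 2.0 else 0
```

Raising the probability-map weight near ambiguous boundary zones is one way a region-dependent weight function can arbitrate when the three planes disagree.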

  4. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capture and its application to a system prototype are presented. For 3D image capture, the system applies the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the highest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth imaging and its capture device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
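Shutter-based TOF demodulation ultimately recovers a modulation phase and converts it to depth. The sketch below uses the generic four-phase continuous-wave TOF formula, not the specific optical-shutter electronics of the paper:

```python
import math

def tof_depth(a0, a90, a180, a270, f_mod, c=3.0e8):
    """Depth from four samples of the modulated return taken at
    0/90/180/270 degree shutter phase offsets (standard CW-TOF):

        phase = atan2(a270 - a90, a0 - a180)
        depth = c * phase / (4 * pi * f_mod)
    """
    phase = math.atan2(a270 - a90, a0 - a180) % (2 * math.pi)
    return c * phase / (4.0 * math.pi * f_mod)
```

At a 20 MHz modulation frequency the unambiguous range is c/(2f) = 7.5 m, so mm-scale accuracy corresponds to resolving the phase to a small fraction of a degree.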

  5. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Astrophysics Data System (ADS)

    Ericson, Mark; McKinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
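Direction encoding for such displays rests on interaural cues. As one concrete example, the classical Woodworth spherical-head model approximates the interaural time difference (ITD) as a function of source azimuth; the head radius below is a typical textbook value, not a figure from these experiments:

```python
import math

def itd_woodworth(azimuth_rad, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head ITD model, valid for azimuths from 0 to
    pi/2 rad: ITD = (a / c) * (sin(theta) + theta), in seconds."""
    return (head_radius_m / c) * (math.sin(azimuth_rad) + azimuth_rad)
```

The model predicts roughly 0.65 ms of delay for a source at 90 degrees azimuth, which is the scale of cue a binaural display generator must reproduce (together with level and spectral cues) for a convincing acoustic image.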

  6. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
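The full width at half maximum quoted for the microbeam profiles can be computed from a sampled intensity profile with sub-pixel linear interpolation. This is a generic sketch of the measurement, not the analysis software used in the study:

```python
def fwhm(xs, ys):
    """Full width at half maximum of a single-peaked profile, with linear
    interpolation between samples for sub-sample crossing positions."""
    peak = max(ys)
    half = peak / 2.0
    i_peak = ys.index(peak)

    def cross(i, step):
        # walk from the peak until the profile drops to the half maximum
        while 0 <= i + step < len(ys) and ys[i + step] > half:
            i += step
        j = i + step
        if not (0 <= j < len(ys)):
            raise ValueError("profile does not fall below half maximum")
        t = (half - ys[i]) / (ys[j] - ys[i])  # linear interpolation
        return xs[i] + t * (xs[j] - xs[i])

    return cross(i_peak, 1) - cross(i_peak, -1)
```

With xs in micrometres (e.g. 0.09 um per pixel, as in the confocal images), the returned width is directly the microbeam FWHM in micrometres.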

  7. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image-processing capabilities have made 3D visualization of biomedical computer tomographic (CT) images a great aid to research in biomedical engineering. To keep pace with Internet-based technology, in which 3D data are typically stored and processed on powerful servers accessed via TCP/IP, the isosurface results should be usable in medical visualization generally. Furthermore, this project will form part of the PACS system our laboratory is developing. The system therefore uses the VRML 2.0 3D file format, which allows 3D models to be manipulated through a Web interface. We implemented the generation and modification of triangular isosurface meshes using the marching cubes algorithm, and then used OpenGL and MFC to render the isosurfaces and manipulate the voxel data. This software provides more adequate visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and that the set of tools for 3D visualization is limited. These limitations, however, have not affected the applicability of the platform for the tasks needed in elementary laboratory experiments or data preprocessing.

  8. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    PubMed

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. Better assessment of the inferior pole of the tonsils and the tongue base requires flexible laryngoscopes, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis of tonsillar hypertrophy is generally subjective, and the current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can increase airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils in 3D printed realistic models and, from that, estimate the airway obstruction percentage and the volume of the tonsils. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667
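The reported quantities can be illustrated with two small helpers: an obstruction percentage from tonsil and airway widths, and a mapping to Brodsky grades. The width-based definition is an assumption for illustration (the paper derives obstruction from the reconstructed 3D shape), and the 25/50/75% cut-offs are the commonly cited Brodsky boundaries, which should be confirmed against the clinical reference before any real use:

```python
def obstruction_pct(left_tonsil_mm, right_tonsil_mm, airway_mm):
    """Percentage of the oropharyngeal airway width occupied by the two
    tonsils (illustrative width-based definition)."""
    return 100.0 * (left_tonsil_mm + right_tonsil_mm) / airway_mm

def brodsky_grade(pct):
    """Map obstruction percentage to a Brodsky grade 1-4 using the commonly
    cited 25/50/75% cut-offs (grade 0, tonsils within the fossa, omitted)."""
    if pct < 25.0:
        return 1
    if pct < 50.0:
        return 2
    if pct < 75.0:
        return 3
    return 4
```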

  9. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. Better assessment of the inferior pole of the tonsils and the tongue base requires flexible laryngoscopes, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis of tonsillar hypertrophy is generally subjective, and the current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can increase airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils in 3D printed realistic models and, from that, estimate the airway obstruction percentage and the volume of the tonsils. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667

  10. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm while providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  11. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    PubMed

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm while providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  12. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    NASA Astrophysics Data System (ADS)

    Hess, M.; Robson, S.; Hosseininaveh Ahmadabadian, A.

    2014-06-01

    An independent means of 3D image quality assessment is introduced, addressed to non-professional users of sensors and freeware, a market largely characterized by closed-source software and by the absence of quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control through quantitative assessment of 3D imaging sensors. It will enable users to specify precisely what spatial resolution and geometric fidelity they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, from which a possible winner will emerge.

  13. 3-D System-on-System (SoS) Biomedical-Imaging Architecture for Health-Care Applications.

    PubMed

    Sang-Jin Lee; Kavehei, O; Yoon-Ki Hong; Tae Won Cho; Younggap You; Kyoungrok Cho; Eshraghian, K

    2010-12-01

    This paper presents the implementation of a 3-D architecture for a biomedical-imaging system based on a multilayered system-on-system structure. The architecture consists of a complementary metal-oxide-semiconductor image sensor layer, memory, a 3-D discrete wavelet transform (3D-DWT), a 3-D Advanced Encryption Standard (3D-AES) block, and an RF transmitter as an add-on layer. Multilayer silicon (Si) stacking permits fabrication and optimization of the individual layers by different processing technologies to achieve optimal performance. Utilization of a through-silicon-via scheme can address the required low-power operation as well as high-speed performance. Potential benefits of 3-D vertical integration include an improved form factor, a reduction in total wiring length, multifunctionality, power efficiency, and flexible heterogeneous integration. The proposed imaging architecture was simulated using Cadence Spectre and Synopsys HSPICE, while implementation was carried out with Cadence Virtuoso and Mentor Graphics Calibre. PMID:23853380
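As a minimal illustration of what the 3D-DWT layer computes, the sketch below performs one level of a separable 3D Haar transform, the simplest wavelet; the actual hardware may use a different filter bank and a fixed-point pipeline:

```python
def haar1d(xs):
    """One Haar step on an even-length sequence: pairwise averages
    followed by pairwise differences."""
    avg = [(xs[i] + xs[i + 1]) / 2.0 for i in range(0, len(xs), 2)]
    dif = [(xs[i] - xs[i + 1]) / 2.0 for i in range(0, len(xs), 2)]
    return avg + dif

def haar3d_level(vol):
    """One level of a separable 3D Haar DWT on vol[z][y][x] (even dims).
    The low-pass corner vol[0][0][0] ends up holding the volume mean."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    vol = [[haar1d(row) for row in plane] for plane in vol]   # along x
    for z in range(nz):                                       # along y
        for x in range(nx):
            col = haar1d([vol[z][y][x] for y in range(ny)])
            for y in range(ny):
                vol[z][y][x] = col[y]
    for y in range(ny):                                       # along z
        for x in range(nx):
            col = haar1d([vol[z][y][x] for z in range(nz)])
            for z in range(nz):
                vol[z][y][x] = col[z]
    return vol
```

Concentrating the image energy into the low-pass subband is what makes the DWT output compressible before encryption and RF transmission.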

  14. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    SciTech Connect

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
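MCR resolves both the component spectra and their concentrations from the data alone. The simpler sub-problem, recovering per-voxel concentrations when the component emission spectra are already known, is ordinary least squares and gives a feel for how the concentration maps arise. The two-component closed form and all names below are illustrative:

```python
def unmix2(spectrum, s1, s2):
    """Least-squares concentrations of two known emission spectra in a
    measured per-voxel spectrum, via the 2x2 normal equations; negative
    solutions are clipped to zero as a crude non-negativity constraint."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    a11, a12, a22 = dot(s1, s1), dot(s1, s2), dot(s2, s2)
    b1, b2 = dot(s1, spectrum), dot(s2, spectrum)
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return max(c1, 0.0), max(c2, 0.0)
```

Applying this at every voxel of a hyperspectral z-stack yields one 3D concentration map per fluorophore; full MCR alternates a step like this with re-estimation of the spectra themselves.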

  15. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep space communications, and scanning video imaging are three applications requiring very low noise optical receivers to detect fast and weak optical signals. HgCdTe electron-initiated avalanche photodiodes (e-APDs) in linear multiplication mode are the detector of choice thanks to their high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 um pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, a column-level circuit, timing control, bias circuitry, and an output driver. The unit cell circuit is the key component; it consists of a preamplifier, correlated double sampling (CDS), bias circuit, and timing control module. Specifically, the preamplifier uses a capacitive-feedback transimpedance amplifier (CTIA) structure with two feedback capacitors to offer switchable capacitance for passive/active dual-mode imaging. The core of the column-level circuit is a precision multiply-by-two stage implemented with switched capacitors, whose operating characteristics make them well suited to ROIC signal processing. The output driver is a simple unity-gain buffer; because the signal is already amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 uA, the circuit shows nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.
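The quoted sub-1% nonlinearity can be computed with a standard best-fit-line metric: the maximum deviation of the measured output-versus-current curve from its least-squares line, expressed as a percentage of the full-scale output span. The paper does not state which definition it uses, so this is one common choice:

```python
def nonlinearity_pct(currents, outputs):
    """Max deviation from the least-squares best-fit line, as a
    percentage of the full-scale output span."""
    n = len(currents)
    mx = sum(currents) / n
    my = sum(outputs) / n
    sxx = sum((x - mx) ** 2 for x in currents)
    sxy = sum((x - mx) * (y - my) for x, y in zip(currents, outputs))
    slope = sxy / sxx
    intercept = my - slope * mx
    span = max(outputs) - min(outputs)
    dev = max(abs(y - (slope * x + intercept))
              for x, y in zip(currents, outputs))
    return 100.0 * dev / span
```

Sweeping the injected current over the stated 200 nA to 4 uA (active) or 1 nA to 20 nA (passive) range and feeding the measured outputs to this function reproduces the kind of figure the paper reports.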

  16. 3D Seismic Imaging over a Potential Collapse Structure

    NASA Astrophysics Data System (ADS)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle East has seen a recent boom in construction, including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), the thickness of top soil or rock layers, and the depth and elastic parameters of the basement comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly at the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques for their effectiveness in interrogating the subsurface for the presence of karst-like collapse structures. The survey covered an area of approximately 10,000 m2 and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop, while the geophones were 3-component 10 Hz velocity sensors. To date, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of the recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be used to determine the elastic moduli of the subsurface rock layers.
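
    The tomographic step rests on the linear relation t = L s between first-arrival times, ray path lengths through each cell, and cell slowness (1/velocity). A toy straight-ray inversion illustrates the idea; the 2x2 grid, the six ray paths, and the velocities are invented purely for illustration and are far simpler than a real 100,000-arrival dataset.

```python
import numpy as np

# Toy straight-ray travel-time tomography: t = L @ s, where L holds the
# path length of each ray through each cell and s is slowness (s/m).
true_slowness = np.array([1/1500.0, 1/1800.0, 1/2000.0, 1/2500.0])

# Path-length matrix: 6 rays crossing a 2x2 grid of 10 m cells
# (4 straight rows/columns plus 2 diagonals to make the system full rank).
L = np.array([
    [10.0, 10.0,  0.0,  0.0],
    [ 0.0,  0.0, 10.0, 10.0],
    [10.0,  0.0, 10.0,  0.0],
    [ 0.0, 10.0,  0.0, 10.0],
    [14.14, 0.0,  0.0, 14.14],
    [ 0.0, 14.14, 14.14, 0.0],
])
t_obs = L @ true_slowness                 # "observed" first-arrival times

# Least-squares inversion for slowness, then convert back to velocity.
s_hat, *_ = np.linalg.lstsq(L, t_obs, rcond=None)
v_hat = 1.0 / s_hat
print(np.round(v_hat, 1))
```

Because the toy system is consistent and full rank, the inversion recovers the cell velocities exactly; real tomography adds regularization, ray bending, and picking noise.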

  17. Development of ceramic-reinforced photopolymers for SLA 3D printing technology

    NASA Astrophysics Data System (ADS)

    Yun, Ji Sun; Park, Tae-Wan; Jeong, Young Hun; Cho, Jeong Ho

    2016-06-01

    Al2O3 ceramic-reinforced photopolymer samples for SLA 3D printing technology were prepared using a silane coupling agent (VTES, vinyltriethoxysilane). When VTES was coated onto the ceramic surface, the dispersion of the ceramic particles in the photopolymer solution improved remarkably. SEM, TEM, and element-mapping images showed Al2O3 particles well wrapped with VTES and distributed uniformly across the cross-sectional surfaces of the 3D-printed objects. The tensile properties (stress-strain curves) of 3D-printed objects made from the ceramic-reinforced photopolymer were investigated as a function of Al2O3 ceramic content over the range 0 to 20 wt%. The results demonstrate that an Al2O3 ceramic content of 15 wt% resulted in enhanced tensile characteristics.

  18. 3D imaging of enzymes working in situ.

    PubMed

    Jamme, F; Bourquin, D; Tawil, G; Viksø-Nielsen, A; Buléon, A; Réfrégiers, M

    2014-06-01

    Today, the development of slowly digestible food with a positive health impact and the production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or of biomass such as lignocellulose. Label-free imaging using UV autofluorescence provides a great tool to follow a single enzyme acting on a non-UV-fluorescent substrate. In this article, we report synchrotron deep-UV (DUV) fluorescence in 3-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup was specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. This tool is thus particularly effective for improving knowledge and understanding of the enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, and biofuels. PMID:24796213

  19. 3D cutting tool inspection system and its key technologies

    NASA Astrophysics Data System (ADS)

    Du, X. M.; Chen, T.; Zou, X. J.; Harding, K. G.

    2009-08-01

    Cutting tools are an essential component used in manufacturing parts for different products. Many cutting tools are manufactured with complex geometric shapes and sharp and/or curved edges. As such, maintaining quality control of cutting tools during their fabrication may be essential to controlling the quality of components manufactured using them. In this paper, a 3D cutting tool inspection system is presented. The architecture of the system, the cutter inspection workflow, and some key technologies are discussed. The key technologies involve two aspects. The first is the system's extrinsic self-calibration method for ensuring system accuracy. This paper elaborates on how to calibrate the orientation and location of the rotary stage in the coordinate system, including the relative relationship between the axis of the chuck used to hold the tool and the rotary axis used to position the tool, along with the relative relationship between the Z stage and the rotary axis. Further, this paper analyzes self-calibration solutions that separately correct the squareness error of the optical measuring beam and the alignment error between a side scan and a tip scan. The second aspect is a method of scan planning for automatic and effective data collection. Tool measurement planning plays a large role in saving measurement time and improving data accuracy, as well as ensuring data completeness. This paper presents a round-part-oriented measurement method that includes coarse and fine section scans aimed at capturing 2D section geometry progressively, covering the key sharp or curved edge areas, and a side helical scan combined with a tip round scan for shape-simulated full-geometry capture. Finally, this paper presents experimental results and some field test data.

  20. 3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse

    PubMed Central

    Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.

    2009-01-01

    We developed the Case Cryo-imaging system, which provides information-rich, very high-resolution, color brightfield and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse in which enhanced green fluorescent protein was expressed under the gamma-actin promoter in smooth muscle cells gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm over very large regions of mouse brain. The software is fully automated, with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field of view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole-animal in vivo imaging and histology. PMID:19248166

  1. A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images

    PubMed Central

    Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade

    2016-01-01

    Background We describe how thoracic surgeons use 2D and 3D medical imaging for surgical planning, clinical practice, and teaching in thoracic surgery, and compare Brazilian thoracic surgeons' initial and final preferences for 2D versus 3D images before and after they acquired theoretical knowledge on the generation, manipulation, and interactive viewing of 3D images. Methods A descriptive cross-sectional survey of Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery) who completed an online questionnaire on their computers or personal devices. Results Of the 395 invitations distributed by email, 107 surgeons completed the survey. There was no statistically significant difference between 2D and 3D model images for the following purposes: diagnosis, assessment of disease extent, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. Regarding the type of tomographic image display routinely used in clinical practice, surgeons choosing the exclusive use of 2D images fell from 50.47% (initial choice) to 14.02% (final preference), while those choosing 3D models combined with 2D images rose from 48.60% (initial choice) to 85.05% (final preference). This shift toward 3D models used together with 2D images was significant (P<0.0001). Conclusions There is a lack of knowledge of 3D imaging, as well as of its use and interactive manipulation in dedicated 3D applications, with a consequent lack of uniformity in surgical planning based on CT images. These findings confirm a change in thoracic surgeons' preference from 2D views toward 3D imaging technologies. PMID:27621874

  2. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensates for the distortion in the elemental images of an EHR projection-type integral 3-D system. As a result, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
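
    The electronic pre-compensation idea can be sketched as resampling each elemental image through the inverse of the projection lens's distortion, so that the pre-warp and the lens distortion cancel. The one-term radial model, the coefficient k1, and the nearest-neighbour resampling below are illustrative assumptions, not the authors' measured distortion or signal processor.

```python
import numpy as np

def radial_map(h, w, k1):
    """Source coordinates for each output pixel under a one-term radial model."""
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2, (h - 1) / 2
    xn, yn = (x - cx) / cx, (y - cy) / cy     # normalized coordinates
    scale = 1 + k1 * (xn**2 + yn**2)
    return yn * scale * cy + cy, xn * scale * cx + cx

def warp_nearest(img, k1):
    """Backward-warp an image through the radial model (nearest neighbour)."""
    ys, xs = radial_map(*img.shape, k1)
    ys = np.clip(np.rint(ys), 0, img.shape[0] - 1).astype(int)
    xs = np.clip(np.rint(xs), 0, img.shape[1] - 1).astype(int)
    return img[ys, xs]

# Test grid standing in for an elemental-image pattern.
img = np.zeros((65, 65)); img[::8, :] = 1.0; img[:, ::8] = 1.0

pre = warp_nearest(img, -0.05)     # electronic pre-compensation (inverse warp)
out = warp_nearest(pre, +0.05)     # projection lens applies forward distortion
err = np.mean(np.abs(out - img))
print(f"residual error after compensation: {err:.4f}")
```

The residual after pre-compensation is much smaller than the error left by the uncompensated distortion, mirroring how the video signal processor removes the elemental-image distortion.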

  3. Transmission of holographic 3D images using infrared transmitter(II): on a study of transmission of holographic 3D images using infrared transmitter safe to medical equipment

    NASA Astrophysics Data System (ADS)

    Takano, Kunihiko; Muto, Kenji; Tian, Lan; Sato, Koki

    2007-09-01

    An infrared transmission technique for holographic 3D images is studied. It appears to be very effective for transmitting holographic 3D images in places where radio waves may not be used for transmission. In this paper, we first describe our infrared transmission system for holograms and a display system for presenting the holographic 3D images reconstructed from the received signal. Next, we report the results of transmitting computer-generated holograms (CGHs) over the infrared link and compare the original and reconstructed 3D images in our system. We find that the reconstructed holographic 3D images do not suffer large quality degradation and can be presented with high contrast.

  4. Generation of Multi-Scale Vascular Network System within 3D Hydrogel using 3D Bio-Printing Technology.

    PubMed

    Lee, Vivian K; Lanzi, Alison M; Haygan, Ngo; Yoo, Seung-Schik; Vincent, Peter A; Dai, Guohao

    2014-09-01

    Although 3D bio-printing technology has great potential for creating complex tissues with multiple cell types and matrices, maintaining the viability of thick tissue constructs for tissue growth and maturation after printing is challenging due to the lack of vascular perfusion. A perfused capillary network can solve this issue; however, constructing a complete capillary network at the single-cell level with existing technology is nearly impossible due to limitations in the time and spatial resolution of the dispensing technology. To address the vascularization issue, we developed a 3D printing method to construct larger fluidic vascular channels (lumen size of ~1 mm) and to create an adjacent capillary network through a natural maturation process, thus providing a feasible way to connect the capillary network to the large perfused vascular channels. In our model, a microvascular bed was formed between two large fluidic vessels and then connected to the vessels by angiogenic sprouting from the edges of the large channels. Our bio-printing technology has great potential for engineering vascularized thick tissues and vascular niches, as the vascular channels are created at the same time as cells and matrices are printed around them in desired 3D patterns. PMID:25484989

  6. Segmentation of Blood Vessels and 3D Representation of CMR Image

    NASA Astrophysics Data System (ADS)

    Jiji, G. W.

    2013-06-01

    Current cardiac magnetic resonance imaging (CMR) technology allows the determination of patient-individual coronary tree structure, detection of infarctions, and assessment of myocardial perfusion. The purpose of this work is to segment the heart's blood vessels and visualize them in 3D. In this work, 3D visualisation of the vessels was performed in four phases. The first step is to detect the tubular structures using a multiscale medialness function, which distinguishes tube-like structures from other structures. The second step is to extract the centrelines of the tubes; from the centreline radius, a cylindrical tube model is constructed. The third step is segmentation of the tubular structures, in which the cylindrical tube model is used. The fourth step is the 3D representation of the tubular structures using volume rendering. The proposed approach was applied to 10 patient datasets from the clinical routine, and the results were reviewed with radiologists.

  7. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi2 nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively. PMID:19269094
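
    The central idea of DART, alternating a continuous algebraic reconstruction with snapping pixels to the few known grey levels, can be illustrated on a toy binary phantom whose row and column sums determine it uniquely. This is a one-shot simplification invented for illustration; the real DART algorithm iterates and only fixes pixels it considers reliable.

```python
import numpy as np

# Toy discrete-tomography demo: reconstruct a binary 4x4 image from only
# its row and column sums, then snap the continuous solution to the two
# known grey levels.  The staircase phantom is uniquely determined by
# its projections.
phantom = np.tril(np.ones((4, 4)))

# Projection operator: 4 row sums and 4 column sums of the flattened image.
A = np.zeros((8, 16))
for i in range(4):
    A[i, i * 4:(i + 1) * 4] = 1               # row sums
    A[4 + i, i::4] = 1                        # column sums
b = A @ phantom.ravel()

# Continuous algebraic reconstruction (minimum-norm least squares) ...
x_cont = np.linalg.lstsq(A, b, rcond=None)[0]
# ... followed by the discrete step: snap each pixel to the nearest level.
x_disc = (x_cont > 0.5).astype(float).reshape(4, 4)

print(np.array_equal(x_disc, phantom))        # prints True: exact recovery
```

The continuous solution takes values 0.375 and 0.625 here, so a single thresholding step already yields the exact discrete image, which is the property DART exploits iteratively on real, noisy projection data.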

  8. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
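
    A minimal sketch of the index follows, assuming the Relative Entropy is computed between the observed distribution of pore voxels over cubic boxes at a given scale and a uniform reference distribution; the random 32^3 volume stands in for segmented CT imagery, and the exact formulation in the cited works may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary 3D "soil" volume: True = pore voxel, ~30% porosity.
vol = rng.random((32, 32, 32)) < 0.3

def relative_entropy(vol, box):
    """KL divergence between the observed distribution of pore voxels
    over cubic boxes of side `box` and a uniform reference."""
    n = vol.shape[0] // box
    # Block-sum trick: reshape into (n, box, n, box, n, box) and sum boxes.
    counts = vol.reshape(n, box, n, box, n, box).sum(axis=(1, 3, 5)).ravel()
    p = counts / counts.sum()
    q = 1.0 / p.size                      # uniform reference distribution
    nz = p > 0                            # 0 * log 0 is taken as 0
    return float(np.sum(p[nz] * np.log(p[nz] / q)))

for box in (4, 8, 16):
    print(box, relative_entropy(vol, box))
```

The index is zero for a perfectly uniform pore distribution and grows with spatial clustering, so evaluating it across box sizes probes the scaling of soil structure.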

  9. Image-Based 3d Reconstruction and Analysis for Orthodontia

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measuring tooth parameters and designing the ideal arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are put on the teeth, and a wire of given shape clamped by these brackets to produce the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurement of the tooth parameters needed for adequate planning, design of correct tooth positions, and monitoring of the treatment process. The developed technique applies photogrammetric means for 3D model generation of the dental arch, determination of bracket positions, and analysis of tooth movement.

  10. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-07-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.

  11. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the method can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
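
    The volume-based similarity measures mentioned above are straightforward to compute from boolean masks. A minimal sketch follows; the two square "kidney" masks are invented purely for illustration.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard coefficient: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

# Two overlapping toy masks (automated vs gold standard), 36 voxels each,
# with an intersection of 25 voxels.
auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True
gold = np.zeros((10, 10), bool); gold[3:9, 3:9] = True
print(dice(auto, gold), jaccard(auto, gold))
```

The two coefficients are related by J = D / (2 - D), which is why papers often report only one of them.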

  12. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging.

    PubMed

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  14. New solutions and applications of 3D computer tomography image processing

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kroll, Julia; Verl, Alexander

    2008-02-01

    As industry nowadays aims at fast, high-quality product development and manufacturing processes, modern and efficient quality inspection is essential. Compared to conventional measurement technologies, industrial computer tomography (CT) is a non-destructive technology for 3D image data acquisition that overcomes their disadvantages by offering the possibility to scan complex parts with all outer and inner geometric features. In this paper, new and optimized methods for 3D image processing are presented, including innovative approaches to surface reconstruction and automatic geometric feature detection for complex components, in particular our work on smart online data processing and data handling with integrated intelligent online mesh reduction, which guarantees the processing of huge, high-resolution data sets. In addition, new approaches to surface reconstruction and segmentation based on statistical methods are demonstrated. On the extracted 3D point cloud or surface triangulation, automated and precise algorithms for geometric inspection are deployed. All algorithms are applied to different real data sets generated by computer tomography in order to demonstrate the capabilities of the new tools. Since CT is an emerging technology for non-destructive testing and inspection, more and more industrial application fields will use and profit from it.

  15. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and to visualize complex anatomic structures, to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is carried out via the 2D screen of the device. Technologies such as holography and virtual reality do provide a "true 3D screen"; to confine the scope, this presentation will not discuss such approaches.
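
    As a concrete example of a computer-display-based rendering of volumetric data, a maximum intensity projection (MIP) collapses a 3D volume onto a 2D screen image by taking the brightest voxel along each viewing ray; the synthetic volume below, with a bright "vessel" embedded in noise, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a CT/MR volume: random background plus a bright structure
# running along the z axis.
volume = rng.random((64, 64, 64))
volume[30:34, 30:34, :] = 1.0                 # bright "vessel"

# Maximum intensity projection along axis 0: each output pixel holds the
# brightest voxel encountered along that axis.
mip_axial = volume.max(axis=0)                # (64, 64) 2D screen image
print(mip_axial.shape, mip_axial[30, 30])
```

MIP is a simple but widely used volume-rendering operation, particularly for angiographic data, precisely because it maps 3D data onto an ordinary 2D display.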

  16. 3D hydrodynamic focusing microfluidics for emerging sensing technologies.

    PubMed

    Daniele, Michael A; Boyd, Darryl A; Mott, David R; Ligler, Frances S

    2015-05-15

    While the physics behind laminar flows has been studied for 200 years, understanding of how to use parallel flows to augment the capabilities of microfluidic systems has been a subject of study primarily over the last decade. The use of one flow to focus another within a microfluidic channel has graduated from a two-dimensional to a three-dimensional process and the design principles are only now becoming established. This review explores the underlying principles for hydrodynamic focusing in three dimensions (3D) using miscible fluids and the application of these principles for creation of biosensors, separation of cells and particles for sample manipulation, and fabrication of materials that could be used for biosensors. Where sufficient information is available, the practicality of devices implementing fluid flows directed in 3D is evaluated and the advantages and limitations of 3D hydrodynamic focusing for the particular application are highlighted. PMID:25041926

  17. P-Cable: New High-Resolution 3D Seismic Acquisition Technology

    NASA Astrophysics Data System (ADS)

    Planke, Sverre; Berndt, Christian; Mienert, Jürgen; Bünz, Stefan; Eriksen, Frode N.; Eriksen, Ola K.

    2010-05-01

    We have developed a new cost-efficient technology for acquisition of high-resolution 3D seismic data: the P-Cable system. This technology is very well suited for deep water exploration, site surveys, and studies of shallow gas and fluid migration associated with gas hydrates or leaking reservoirs. It delivers unparalleled 3D seismic images of subsurface sediment architectures. The P-Cable system consists of a seismic cable towed perpendicular to a vessel's steaming direction. This configuration allows us to image an up to 150 m wide swath of the sub-surface for each sail line. Conventional 3D seismic technology relies on several very long streamers (up to 10 km long streamers are common), large sources, and costly operations. In contrast, the P-Cable system is lightweight and fast to deploy from small vessels. Only a small source is required as the system is made for relatively shallow imaging, typically above the first water-bottom multiple. The P-Cable system is particularly useful for acquisition of small 3D cubes, 10-50 km2, in focus areas, rather than extensive mapping of large regions. The rapid deployment and recovery of the system makes it possible to acquire several small cubes (10 to 30 km2) with high-resolution (50-250 Hz) seismic data during one cruise. The first development of the P-Cable system was a cooperative project achieved by Volcanic Basin Petroleum Research (VBPR), University of Tromsø, National Oceanography Centre, Southampton, and industry partners. Field trials using a 12-streamer system were conducted on sites with active fluid-leakage systems on the Norwegian-Barents-Svalbard margin, the Gulf of Cadiz, and the Mediterranean. The second phase of the development introduced digital streamers. The new P-Cable2 system also includes integrated tow and cross cables for power and data transmission and improved doors to spread the larger cross cable. This digital system has been successfully used during six cruises by the University of Tromsø.

  18. 3D in vivo optical skin imaging for intense pulsed light and fractional ablative resurfacing of photodamaged skin.

    PubMed

    Clementoni, Matteo Tretti; Lavagno, Rosalia; Catenacci, Maximilian; Kantor, Roman; Mariotto, Guido; Shvets, Igor

    2011-11-01

    The authors present a 3D in vivo imaging system used to assess the effectiveness of IPL and fractional laser treatments of photodamaged skin. Preoperative and postoperative images of patients treated with these procedures are analyzed and demonstrate the superior ability of this 3D technology to reveal decreases in vascularity, show improvements in melanin distribution, and quantify individual deep wrinkles before and after treatment. PMID:22004864

  19. NDE of spacecraft materials using 3D Compton backscatter x-ray imaging

    NASA Astrophysics Data System (ADS)

    Burke, E. R.; Grubsky, V.; Romanov, V.; Shoemaker, K.

    2016-02-01

    We present the results of testing of the NDE performance of a Compton Imaging Tomography (CIT) system for single-sided, penetrating 3D inspection. The system was recently developed by Physical Optics Corporation (POC) and delivered to NASA for testing and evaluation. The CIT technology is based on 3D structure mapping by collecting the information on density profiles in multiple object cross sections through hard x-ray Compton backscatter imaging. The individual cross sections are processed and fused together in software, generating a 3D map of the density profile of the object which can then be analyzed slice-by-slice in the x, y, or z directions. The developed CIT scanner is based on a 200-kV x-ray source, a flat-panel x-ray detector (FPD), and apodized x-ray imaging optics. The CIT technology is particularly well suited to the NDE of lightweight aerospace materials, such as the thermal protection system (TPS) ceramic and composite materials, micrometeoroid and orbital debris (MMOD) shielding, spacecraft pressure walls, inflatable habitat structures, composite overwrapped pressure vessels (COPVs), and aluminum honeycomb materials. The current system provides 3D localization of defects and features with a field of view of 20x12x8 cm3 and a spatial resolution of ~2 mm. In this paper, we review several aerospace NDE applications of the CIT technology, with particular emphasis on TPS. Based on the analysis of the testing results, we provide recommendations for continued development of the TPS applications that can benefit most from the unique capabilities of this new NDE technology.

  20. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results have been reported using postoperative CT; however, its routine clinical use is hampered by the requirement for a CT scan of each patient, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53+/-0.30 mm distance error.

  1. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    PubMed

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591
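The record above attributes the compression gains to sparsity and inter-slice correlation. A minimal sketch of that general idea (a toy transform-domain coder; this is NOT the authors' 3D-ASRC algorithm, which uses learned sparse representations):

```python
import numpy as np

def compress_stack(stack, keep_ratio=0.05):
    """Toy transform-domain compression: 3D FFT over a stack of adjacent
    slices, retain only the `keep_ratio` largest-magnitude coefficients,
    reconstruct by inverse FFT.  Illustrates how inter-slice correlation
    helps sparsity-based compression; not the 3D-ASRC algorithm itself."""
    coeffs = np.fft.fftn(stack)
    flat = coeffs.ravel()
    k = max(1, int(keep_ratio * flat.size))
    keep = np.argsort(np.abs(flat))[-k:]     # indices of the top-k coefficients
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    recon = np.fft.ifftn(sparse.reshape(coeffs.shape)).real
    return recon, int(np.count_nonzero(sparse))

# Eight highly correlated "adjacent slices": one band-limited pattern
# whose amplitude varies slowly from slice to slice.
x = np.arange(64)
base = np.outer(np.sin(2 * np.pi * x / 64), np.cos(2 * np.pi * x / 64))
stack = np.stack([(1.0 + 0.05 * i) * base for i in range(8)])

recon, nnz = compress_stack(stack, keep_ratio=0.05)
assert nnz <= int(0.05 * stack.size)        # over 95% of coefficients discarded
assert np.abs(recon - stack).max() < 1e-8   # correlated data: near-lossless
```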

  2. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  3. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based matching technique and an image-space-based matching technique, and compared the performance of the two. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that, through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the image-space-based matching results, we observed some blanks in the 3D point clouds. In the object-space-based matching results, we observed more blunders than in the image-space-based results, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
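The object-space-based matching described above (candidate heights, projection into the images, grey-level correlation) can be sketched in a toy one-dimensional setting. All camera parameters and the terrain texture below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy object-space matching along one ground coordinate: for a fixed
# horizontal position X0, test candidate heights, project each into two
# nadir-looking images, and keep the height whose patches correlate best.
rng = np.random.default_rng(0)

F, H, B = 1000.0, 500.0, 50.0       # focal (px), flying height, baseline (m); assumed
Z_TRUE, X0 = 50.0, 60.0             # true terrain height and ground position (m)

# Smooth random terrain texture along X, and the two images it produces.
Xgrid = np.linspace(-10.0, 150.0, 6000)
texture = np.convolve(rng.standard_normal(6000), np.ones(64) / 64, mode="same")
tex = lambda x: np.interp(x, Xgrid, texture)

px = np.arange(0.0, 200.0)                    # image pixel coordinates
img1 = tex(px * (H - Z_TRUE) / F)             # camera at X = 0
img2 = tex(px * (H - Z_TRUE) / F + B)         # camera at X = B

def ncc(a, b):
    """Zero-mean normalized grey-level correlation of two patches."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

offsets = np.arange(-10, 11, dtype=float)     # patch half-width of 10 px

def score(z):
    """Correlation of the two image patches predicted by candidate height z."""
    x1 = F * X0 / (H - z)                     # projection into image 1
    x2 = F * (X0 - B) / (H - z)               # projection into image 2
    p1 = np.interp(x1 + offsets, px, img1)
    p2 = np.interp(x2 + offsets, px, img2)
    return ncc(p1, p2)

candidates = np.arange(0.0, 100.0, 1.0)
best = candidates[int(np.argmax([score(z) for z in candidates]))]
assert abs(best - Z_TRUE) <= 3.0              # correct height maximizes correlation
```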

  4. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was developed. This educational tool, which includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.
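The reslicing step described above (deriving coronal and sagittal images from a stack of horizontal MRIs) amounts to cutting the same volume along different axes. A minimal sketch, with the axis ordering and the toy volume size assumed for illustration:

```python
import numpy as np

# Stack of horizontal (axial) slices as a 3D array; axis ordering
# (slice, row, column) is an assumption for illustration.
rng = np.random.default_rng(0)
volume = rng.integers(0, 256, size=(60, 64, 80), dtype=np.uint8)

horizontal = volume[30]        # one axial slice,              shape (64, 80)
coronal = volume[:, 32, :]     # one cut across all slices,    shape (60, 80)
sagittal = volume[:, :, 40]    # orthogonal cut,               shape (60, 64)

assert horizontal.shape == (64, 80)
assert coronal.shape == (60, 80)
assert sagittal.shape == (60, 64)
# All three views agree on the shared voxel they pass through.
assert coronal[30, 40] == volume[30, 32, 40] == sagittal[30, 32]
```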

  5. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is to reconstruct a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require a large space and entail high costs. On the other hand, a low-cost, compact 3D imaging system is needed in clinical veterinary medicine, for example, for diagnosis in an X-ray car or in a pasture area. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. It can be realized with a much cheaper system than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  6. Assessment of rhinoplasty techniques by overlay of before-and-after 3D images.

    PubMed

    Toriumi, Dean M; Dixon, Tatiana K

    2011-11-01

    This article describes the equipment and software used to create facial 3D imaging and discusses the validation and reliability of the objective assessments done using this equipment. By overlaying preoperative and postoperative 3D images, it is possible to assess the surgical changes in 3D. Methods are described to assess the 3D changes from the rhinoplasty techniques of nasal dorsal augmentation, increasing tip projection, narrowing the nose, and nasal lengthening. PMID:22004862

  7. Concealed threat detection with the IRAD sub-millimeter wave 3D imaging radar

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Cassidy, Scott L.; Jones, Ben; Clark, Anthony

    2014-06-01

    Sub-millimeter wave 3D imaging radar is a promising technology for the stand-off detection of threats concealed on people. The IRAD 340 GHz 3D imaging radar uses polarization intensity information to identify signatures associated with concealed threats. We report on an extensive trials program which has been carried out involving dozens of individual subjects wearing a variety of different clothing to evaluate the detection of a wide range of threat and benign items. We have developed an automatic algorithm to run on the radar which yields a level of anomaly indication in real time. Statistical analysis of the large volume of recorded data has enabled performance metrics for the radar system to be evaluated.

  8. MMW and THz images denoising based on adaptive CBM3D

    NASA Astrophysics Data System (ADS)

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation has received a great deal of interest due to advances in emission and detection technologies, which have enabled the wide application of millimeter wave and terahertz imaging. This paper focuses on the stripe noise, blocking artifacts, and other interference present in such images. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is used for denoising. Experimental results demonstrate that the method improves the visual quality and removes interference at the same time, making image analysis and target detection easier.

  9. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because they carry no distinctive surface patterns and the use of multiple cameras is not possible in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained with a plenoptic sensor and a single lens. This paper discusses how depth information is represented in phase space data and presents calculation algorithms for media of differing transparency. A 3D imaging example of a waterfall is given at the end.

  10. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm for the ROI (region of interest) by adjusting and synthesizing the disparity values of the ROI in real time. We comment on the aperture pattern for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasis effect.
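The ROI-exaggeration idea (adjusting the disparity values of a region of interest) can be sketched as follows; the functions and the naive view-synthesis scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def exaggerate_roi(disparity, roi, gain):
    """Scale the disparity inside roi = (r0, r1, c0, c1) by `gain`, so the
    region appears to pop out of the stereo display; values outside the
    ROI are left unchanged."""
    out = disparity.astype(float).copy()
    r0, r1, c0, c1 = roi
    out[r0:r1, c0:c1] *= gain
    return out

def synthesize_right(left, disparity):
    """Naive right-view synthesis: move each left-image pixel left by its
    (rounded) disparity; later writes overwrite, holes stay zero."""
    h, w = left.shape
    right = np.zeros_like(left)
    for r in range(h):
        for c in range(w):
            d = int(round(float(disparity[r, c])))
            if 0 <= c - d < w:
                right[r, c - d] = left[r, c]
    return right

left = np.zeros((8, 8))
left[4, 5] = 9.0                        # a single feature pixel
disp = np.full((8, 8), 2.0)             # uniform scene disparity

plain = synthesize_right(left, disp)
boosted = synthesize_right(left, exaggerate_roi(disp, (3, 6, 4, 7), 2.0))
assert plain[4, 3] == 9.0               # shifted by 2 pixels
assert boosted[4, 1] == 9.0             # ROI disparity doubled: shifted by 4
```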

  11. Four-view stereoscopic imaging and display system for web-based 3D image communication

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Park, Young-Gyoo; Kim, Eun-Soo

    2004-10-01

    In this paper, a new software-oriented autostereoscopic 4-view imaging and display system for web-based 3D image communication is implemented using four digital cameras, an Intel Xeon server, a graphics card with four outputs, a projection-type 4-view 3D display system, and Microsoft's DirectShow programming library. Its performance is analyzed in terms of image-grabbing frame rate, displayed image resolution, possible color depth, and number of views. Experimental results show that the proposed system can display 4-view VGA images with 16-bit full color at a real-time frame rate of 15 fps. The image resolution, color depth, frame rate, and number of views are mutually interrelated and can be easily controlled in the proposed system through the developed software, so considerable flexibility in the design and implementation of the proposed multiview 3D imaging and display system is expected in practical applications of web-based 3D image communication.

  12. Imaging 3D strain field monitoring during hydraulic fracturing processes

    NASA Astrophysics Data System (ADS)

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in the concrete were used to monitor the build-up of the 3D strain field under external hydraulic pressure. High-spatial-resolution strain fields were interrogated via in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optic sensing scheme presented in this paper provides scientists and engineers with a unique laboratory tool for understanding hydraulic fracturing processes in various rock formations and their environmental impacts.
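In Rayleigh-backscatter OFDR sensing of the kind described above, strain is commonly recovered from the measured spectral shift through a calibration coefficient. A minimal sketch; the coefficient value below is a placeholder assumption (it depends on fiber type and wavelength), not a number from the paper:

```python
import numpy as np

# Calibration constant: GHz of Rayleigh spectral shift per microstrain.
# Placeholder assumption for illustration only.
K_EPS = -0.15

def strain_profile(spectral_shift_ghz):
    """Convert spectral shifts measured along the fiber (one value per
    gauge section, e.g. every 1 cm) into a microstrain profile."""
    return np.asarray(spectral_shift_ghz, dtype=float) / K_EPS

shifts = np.array([0.0, -0.15, -0.30, -0.75])   # GHz, along the fiber
strains = strain_profile(shifts)                # microstrain
assert np.allclose(strains, [0.0, 1.0, 2.0, 5.0])
```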

  13. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient's medical image [ultrasound (US), computed tomography (CT), or MRI], we have developed an Augmented Reality (AR) system that augments the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented-reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in improving safety, but also the current limits that automatic augmented reality will overcome. Conclusions Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this

  14. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another angle, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper describes the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments were carried out acquiring several 3D fingerprint datasets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
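Fringe projection systems like the one above recover a wrapped phase map from a few phase-shifted patterns. A minimal sketch of standard three-step phase-shifting (a common choice; the paper's optimum three-fringe-number method additionally resolves phase unwrapping, which is not shown here):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Standard three-step phase-shifting: recover the wrapped phase from
    three fringe images taken with phase shifts of -120, 0, +120 degrees."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringes over a known phase ramp, kept inside (-pi, pi) so no
# unwrapping is needed for the check.
phi = np.linspace(-3.0, 3.0, 200)
a, b = 0.5, 0.4                              # background and modulation
i1 = a + b * np.cos(phi - 2.0 * np.pi / 3.0)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2.0 * np.pi / 3.0)

assert np.allclose(wrapped_phase(i1, i2, i3), phi)
```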

  15. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision (σ) in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications. PMID:17706656
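The DRR-based registration loop described above can be sketched in a one-degree-of-freedom toy form; exhaustive search stands in for the iterative optimizer, and all geometry below is assumed for illustration:

```python
import numpy as np
from scipy.ndimage import rotate

def drr(volume, angle_deg):
    """Digitally reconstructed radiograph: rotate the CT volume in-plane,
    then integrate along the x-ray direction (axis 0).  A 1-DOF toy
    version of the DRR generation used inside 2D-3D registration."""
    rot = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rot.sum(axis=0)

volume = np.zeros((24, 32, 32))
volume[8:16, 10:14, 6:26] = 1.0          # an asymmetric "bone" block

target = drr(volume, 7.0)                # reference radiograph, pose unknown

# Exhaustive 1-D search over the rotation parameter.
angles = np.arange(0.0, 15.5, 0.5)
errors = [float(np.sum((drr(volume, a) - target) ** 2)) for a in angles]
best = angles[int(np.argmin(errors))]
assert abs(best - 7.0) <= 0.5            # pose recovered from the DRR match
```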

  16. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, while choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.
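Turning captured views into refocusable output is commonly done by shift-and-sum over the sub-aperture views; a minimal sketch under assumed geometry (a generic light-field refocusing baseline, not the authors' exact transformation):

```python
import numpy as np

def refocus(views, slope):
    """Shift-and-sum refocusing of a (U, V, H, W) array of sub-aperture
    views: each view is shifted in proportion to its (u, v) aperture
    coordinate, then all views are averaged.  Varying `slope` selects
    which depth plane appears in focus."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            out += np.roll(views[u, v], (-dy, -dx), axis=(0, 1))
    return out / (U * V)

# Synthetic light field: a point at the depth corresponding to slope 2.
U = V = 5
views = np.zeros((U, V, 33, 33))
for u in range(U):
    for v in range(V):
        views[u, v, 16 + 2 * (u - 2), 16 + 2 * (v - 2)] = 1.0

sharp = refocus(views, 2.0)
blurred = refocus(views, 0.0)
assert np.isclose(sharp[16, 16], 1.0)   # correct plane: point realigned
assert blurred.max() < 0.2              # wrong plane: energy spread out
```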

  17. A review of automated image understanding within 3D baggage computed tomography security screening.

    PubMed

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter, and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT. PMID:26409422

  18. Lensfree diffractive tomography for the imaging of 3D cell cultures

    PubMed Central

    Momey, F.; Berdeu, A.; Bordy, T.; Dinten, J.-M.; Marcel, F. Kermarrec; Picollet-D’hahan, N.; Gidrol, X.; Allier, C.

    2016-01-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and have developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600
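Holographic reconstruction of this kind rests on numerical free-space propagation. A minimal angular-spectrum propagation sketch (the numerical core of such back-propagation, not the full multi-angle tomographic pipeline; all parameter values are illustrative):

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex optical field by distance dz with the
    angular-spectrum method: FFT, multiply by the propagation transfer
    function, inverse FFT.  Negative dz back-propagates (the basis of
    holographic reconstruction)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz)           # evanescent components left unattenuated here
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A small aperture propagated forward then backward should be recovered.
n, dx, wl = 128, 2e-6, 0.5e-6          # grid size, pixel pitch (m), wavelength (m)
obj = np.zeros((n, n), dtype=complex)
obj[60:68, 60:68] = 1.0
holo_plane = angular_spectrum(obj, 200e-6, wl, dx)      # "hologram" plane
recon = angular_spectrum(holo_plane, -200e-6, wl, dx)   # back-propagation
assert np.allclose(np.abs(recon), np.abs(obj), atol=1e-6)
```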

  19. Lensfree diffractive tomography for the imaging of 3D cell cultures.

    PubMed

    Momey, F; Berdeu, A; Bordy, T; Dinten, J-M; Marcel, F Kermarrec; Picollet-D'hahan, N; Gidrol, X; Allier, C

    2016-03-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and have developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm3 of a 3D organoid culture of prostatic RWPE1 cells, showing the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600

  20. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation at horizontal and vertical spatial scales of 10-100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic superstorm of November 20, 2003. We run IDA4D at low resolution continent-wide, and then re-run it at high resolution (~10 km horizontal and ~5-20 km vertical) locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007
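The ExB drift that EMPIRE-style estimation recovers follows the standard relation v = (E x B)/|B|^2; a minimal sketch with illustrative (not storm-specific) field values:

```python
import numpy as np

def exb_drift(E, B):
    """Bulk plasma drift v = (E x B) / |B|^2 for electric field E (V/m)
    and magnetic field B (T), returned in m/s."""
    E, B = np.asarray(E, dtype=float), np.asarray(B, dtype=float)
    return np.cross(E, B) / np.dot(B, B)

# Illustrative values in a local (east, north, up) frame: 1 mV/m eastward
# E and a 50,000 nT downward B.
E = np.array([1e-3, 0.0, 0.0])    # V/m
B = np.array([0.0, 0.0, -5e-5])   # T
v = exb_drift(E, B)               # m/s
assert np.allclose(v, [0.0, 20.0, 0.0])   # 20 m/s northward drift
```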

  1. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study

    PubMed Central

    Nomura, Kosuke; Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups: group 1 first evaluated the 2D images and group 2 the 3D images; after an interval of 2 weeks, group 1 then evaluated the 3D images and group 2 the 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images improved diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), although the differences were not statistically significant. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of improvement followed the trend novices > trainees > experts. Conclusions. Conversion to 3D images significantly improved the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with less endoscopic expertise. PMID:27597863

  3. Perineal body anatomy in living women: 3-D analysis using thin-slice magnetic resonance imaging

    PubMed Central

    Larson, Kindra A.; Yousuf, Aisha; Lewicky-Gaupp, Christina; Fenner, Dee E.; DeLancey, John O.L.

    2012-01-01

    Objective To describe a framework for visualizing the perineal body's complex anatomy using thin-slice MR imaging. Study Design Two-millimeter-thick MR images were acquired in 11 women with normal pelvic support and no incontinence/prolapse symptoms. Anatomic structures were analyzed in axial, sagittal and coronal slices. 3-D models were generated from these images. Results Three distinct perineal body regions are visible on MRI: (1) a superficial region at the level of the vestibular bulb, (2) a mid region at the proximal end of the superficial transverse perineal muscle, and (3) a deep region at the level of the midurethra and puborectalis muscle. Structures are best visualized on axial scans, while cranio-caudal relationships are appreciated on sagittal scans. The 3-D model further clarifies inter-relationships. Conclusion Advances in MR technology allow visualization of perineal body anatomy in living women and development of 3D models, which enhance our understanding of its three different regions: superficial, mid and deep. PMID:21055513

  4. 3D imaging for ballistics analysis using chromatic white light sensor

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Hildebrandt, Mario; Dittmann, Jana; Clausing, Eric; Fischer, Robert; Vielhauer, Claus

    2012-03-01

    The novel application of sensing technology based on chromatic white light (CWL) gives new insight into the ballistic analysis of cartridge cases. The CWL sensor uses a beam of white light to acquire highly detailed topography and luminance data simultaneously. The proposed 3D imaging system combines the advantages of 3D and 2D image processing algorithms in order to automate the extraction of firearm-specific toolmarks left on fired specimens. The most important characteristics of a fired cartridge case are the type of the breech face marking as well as the size, shape and location of the extractor, ejector and firing pin marks. The feature extraction algorithm normalizes the casing surface and consistently searches for the appropriate distortions on the rim and on the primer. The location of the firing pin mark in relation to the lateral scratches on the rim provides unique rotation-invariant characteristics of the firearm mechanisms. Additional characteristics are the volume and shape of the firing pin mark. The experimental evaluation relies on a data set of 15 cartridge cases fired from three 9 mm firearms of different manufacturers. The results show the very high potential of 3D imaging systems for casing-based computer-aided firearm identification, which is expected to support human expertise in the future.

  5. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with 4.4 ± 3.0% for CT and 3.1 ± 2.0% for MR. The results imply that volume measurement using the 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US fusion in radiotherapy treatment planning was verified.

  6. 3D and 4D Seismic Imaging in the Oilfield; the state of the art

    NASA Astrophysics Data System (ADS)

    Strudley, A.

    2005-05-01

    Seismic imaging in the oilfield context has seen enormous changes over the last 20 years driven by a combination of improved subsurface illumination (2D to 3D), increased computational power and improved physical understanding. Today Kirchhoff Pre-stack migration (in time or depth) is the norm with anisotropic parameterisation and finite difference methods being increasingly employed. In the production context Time-Lapse (4D) Seismic is of growing importance as a tool for monitoring reservoir changes to facilitate increased productivity and recovery. In this paper we present an overview of state of the art technology in 3D and 4D seismic and look at future trends. Pre-stack Kirchhoff migration in time or depth is the imaging tool of choice for the majority of contemporary 3D datasets. Recent developments in 3D pre-stack imaging have been focussed around finite difference solutions to the acoustic wave equation, the so-called Wave Equation Migration methods (WEM). Application of finite difference solutions to imaging is certainly not new, however 3D pre-stack migration using these schemes is a relatively recent development driven by the need for imaging complex geologic structures such as sub salt, and facilitated by increased computational resources. Finally, there is a class of imaging methods referred to as beam migration. These methods may be based on either the wave equation or rays, but all operate on a localised (in space and direction) part of the wavefield. These methods offer a bridge between the computational efficiency of Kirchhoff schemes and the improved image quality of WEM methods. Just as 3D seismic has had a radical impact on the quality of the static model of the reservoir, 4D seismic is having a dramatic impact on the dynamic model. Repeat shooting of seismic surveys after a period of production (typically one to several years) reveals changes in pressure and saturation through changes in the seismic response.
The growth in interest in 4D seismic

  7. Image enhancement and segmentation of fluid-filled structures in 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Dudycha, Stephen; McMorrow, Gerald

    2003-05-01

    Segmentation of fluid-filled structures, such as the urinary bladder, from three-dimensional ultrasound images is necessary for measuring their volume. This paper describes a system for image enhancement, segmentation and volume measurement of fluid-filled structures on 3D ultrasound images. The system was applied for the measurement of urinary bladder volume. Results show an average error of less than 10% in the estimation of the total bladder volume.
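
    The reported volume estimates follow directly from the segmentation: volume is the voxel count inside the binary mask times the volume of one voxel. A minimal sketch of that arithmetic (the mask, voxel spacing, and error figures below are hypothetical examples, not the paper's data):

```python
import numpy as np

def volume_from_mask(mask, spacing_mm):
    """Volume of a binary segmentation in millilitres.

    mask       -- 3-D boolean array (True inside the structure)
    spacing_mm -- (dz, dy, dx) voxel edge lengths in mm
    """
    voxel_mm3 = float(np.prod(spacing_mm))     # volume of one voxel in mm^3
    return mask.sum() * voxel_mm3 / 1000.0     # 1 mL = 1000 mm^3

def percent_error(measured, reference):
    """Unsigned percent error of a measurement against a reference."""
    return 100.0 * abs(measured - reference) / reference

# Hypothetical example: a 20x20x20-voxel block at 0.5 mm isotropic spacing,
# i.e. 8000 voxels x 0.125 mm^3 = 1000 mm^3 = 1 mL.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:30, 10:30, 10:30] = True
vol = volume_from_mask(mask, (0.5, 0.5, 0.5))
print(vol, percent_error(0.75, 1.0))  # 1.0 25.0
```

    In practice the dominant error source is the segmentation boundary itself, not this bookkeeping, which is why the paper's enhancement step matters.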

  8. High Content Imaging (HCI) on Miniaturized Three-Dimensional (3D) Cell Cultures.

    PubMed

    Joshi, Pranav; Lee, Moo-Yeal

    2015-12-01

    High content imaging (HCI) is a multiplexed cell staining assay developed for better understanding of complex biological functions and mechanisms of drug action, and it has become an important tool for toxicity and efficacy screening of drug candidates. Conventional HCI assays have been carried out on two-dimensional (2D) cell monolayer cultures, which in turn limit predictability of drug toxicity/efficacy in vivo; thus, there has been an urgent need to perform HCI assays on three-dimensional (3D) cell cultures. Although 3D cell cultures better mimic in vivo microenvironments of human tissues and provide an in-depth understanding of the morphological and functional features of tissues, they are also limited by having relatively low throughput and thus are not amenable to high-throughput screening (HTS). One attempt of making 3D cell culture amenable for HTS is to utilize miniaturized cell culture platforms. This review aims to highlight miniaturized 3D cell culture platforms compatible with current HCI technology. PMID:26694477

  10. Fast algorithm of 3D median filter for medical image despeckling

    NASA Astrophysics Data System (ADS)

    Xiong, Chengyi; Hou, Jianhua; Gao, Zhirong; He, Xiang; Chen, Shaoping

    2007-12-01

    Three-dimensional (3-D) median filtering is very useful for eliminating speckle noise from medical imaging sources such as functional magnetic resonance imaging (fMRI) and ultrasonic imaging, but it is characterized by high computational complexity: N³(N³-1)/2 comparison operations would be required for 3-D median filtering with an N×N×N window if the conventional bubble-sorting algorithm were adopted. In this paper, an efficient fast algorithm for 3-D median filtering is presented, which considerably reduces the computational complexity of extracting the median of a 3-D data array. Compared to the state of the art, the proposed method reduces the computational complexity of 3-D median filtering by 33%. This efficiently reduces the system delay of the 3-D median filter in software implementations, and the system cost and power consumption in hardware implementations.
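
    The paper's fast algorithm is not described in enough detail here to reproduce, but the brute-force baseline it improves on can be sketched directly; window size and test volume below are arbitrary:

```python
import numpy as np

def median3d(vol, n=3):
    """Brute-force 3-D median filter with an n x n x n window (n odd).

    Borders are handled by replicating edge voxels (np.pad 'edge').
    Each output voxel sorts its whole window, which is the costly
    step that fast median algorithms avoid.
    """
    r = n // 2
    padded = np.pad(vol, r, mode="edge")
    out = np.empty_like(vol)
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                window = padded[z:z + n, y:y + n, x:x + n]
                out[z, y, x] = np.median(window)
    return out

# A single speckle spike in a constant volume is removed entirely.
vol = np.zeros((5, 5, 5))
vol[2, 2, 2] = 100.0          # isolated outlier
clean = median3d(vol)
print(clean.max())  # 0.0 -- the spike is gone
```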

  11. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and since probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slice registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D-to-slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549

  12. Computation of optimized arrays for 3-D electrical imaging surveys

    NASA Astrophysics Data System (ADS)

    Loke, M. H.; Wilkinson, P. B.; Uhlemann, S. S.; Chambers, J. E.; Oxby, L. S.

    2014-12-01

    3-D electrical resistivity surveys and inversion models are required to accurately resolve structures in areas with very complex geology where 2-D models might suffer from artefacts. Many 3-D surveys use a grid where the number of electrodes along one direction (x) is much greater than in the perpendicular direction (y). Frequently, due to limitations in the number of independent electrodes in the multi-electrode system, the surveys use a roll-along system with a small number of parallel survey lines aligned along the x-direction. The `Compare R' array optimization method previously used for 2-D surveys is adapted for such 3-D surveys. Offset versions of the inline arrays used in 2-D surveys are included in the number of possible arrays (the comprehensive data set) to improve the sensitivity to structures in between the lines. The array geometric factor and its relative error are used to filter out potentially unstable arrays in the construction of the comprehensive data set. Comparisons of the conventional (consisting of dipole-dipole and Wenner-Schlumberger arrays) and optimized arrays are made using a synthetic model and experimental measurements in a tank. The tests show that structures located between the lines are better resolved with the optimized arrays. The optimized arrays also have significantly better depth resolution compared to the conventional arrays.
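
    The geometric-factor filter the authors describe can be illustrated with the standard half-space formula k = 2π / (1/AM - 1/BM - 1/AN + 1/BN) for a surface four-electrode array; the electrode positions and rejection threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor k of a surface four-electrode array on a
    homogeneous half-space: k = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN),
    where XY is the distance between electrodes X and Y (3-vectors)."""
    a, b, m, n = (np.asarray(p, dtype=float) for p in (a, b, m, n))
    d = lambda p, q: np.linalg.norm(p - q)
    return 2.0 * np.pi / (1.0 / d(a, m) - 1.0 / d(b, m)
                          - 1.0 / d(a, n) + 1.0 / d(b, n))

def keep_stable(arrays, k_max=5000.0):
    """Discard configurations whose |k| exceeds a threshold: a large
    geometric factor amplifies voltage-measurement noise, i.e. a large
    relative error, which is the filtering idea used in the paper."""
    return [e for e in arrays if abs(geometric_factor(*e)) <= k_max]

# Check against the textbook Wenner-alpha result k = 2*pi*a for spacing a.
s = 5.0  # electrode spacing in metres (arbitrary example value)
k = geometric_factor([0, 0, 0], [3 * s, 0, 0], [s, 0, 0], [2 * s, 0, 0])
print(k)  # = 2*pi*5, about 31.416
```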

  13. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  14. 3D prostate segmentation of ultrasound images combining longitudinal image registration and machine learning

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Fei, Baowei

    2012-02-01

    We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our automatic and manual segmentations is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
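
    The Gabor texture features at the core of such pipelines can be sketched directly. The wavelength, sigma, and orientations below are illustrative assumptions, not the paper's settings, and a real filter bank convolves each kernel over the whole image rather than taking a single inner product:

```python
import numpy as np

def gabor_kernel(ksize, wavelength, theta, sigma):
    """Real (even) 2-D Gabor kernel: a cosine carrier under a Gaussian
    envelope, with the carrier oriented at angle `theta`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_response(patch, theta, wavelength=8.0, sigma=4.0):
    """Inner product of an image patch with one oriented Gabor kernel --
    one texture feature; a bank uses several orientations/wavelengths."""
    k = gabor_kernel(patch.shape[0], wavelength, theta, sigma)
    return float((patch * k).sum())

# A vertical grating responds most strongly to the matching orientation.
y, x = np.mgrid[-10:11, -10:11]
grating = np.cos(2.0 * np.pi * x / 8.0)
r0 = gabor_response(grating, 0.0)          # matched orientation
r60 = gabor_response(grating, np.pi / 3.0)  # mismatched orientation
print(r0 > r60)  # True
```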

  15. Integrated Interventional Devices For Real Time 3D Ultrasound Imaging and Therapy

    NASA Astrophysics Data System (ADS)

    Smith, Stephen W.; Lee, Warren; Gentry, Kenneth L.; Pua, Eric C.; Light, Edward D.

    2006-05-01

    Two recent advances have expanded the potential of medical ultrasound: the introduction of real-time 3-D ultrasound imaging with catheter, transesophageal and laparoscopic probes, and the development of interventional ultrasound therapeutic systems for focused ultrasound surgery, ablation and ultrasound enhanced drug delivery. This work describes devices combining both technologies. A series of transducer probes have been designed, fabricated and tested, including: 1) a 12 French side-scanning catheter incorporating a 64 element matrix array for imaging at 5 MHz and a piston ablation transducer operating at 10 MHz; 2) a 14 Fr forward-scanning catheter integrating a 112 element 2-D array for imaging at 5 MHz encircled by an ablation annulus operating at 10 MHz. Finite element modeling was then used to simulate catheter annular and linear phased array transducers for ablation; 3) linear phased array transducers were built to confirm the finite element analysis at 4 and 8 MHz, including a mechanically focused 86 element 9 MHz array which transmits an ISPTA of 29.3 W/cm2 and creates a lesion in 2 minutes; 4) 2-D arrays of 504 channels operating at 5 MHz have been developed for transesophageal and laparoscopic 3D imaging as well as therapeutic heating. All the devices image the heart anatomy, including atria, valves, septa and en face views of the pulmonary veins.

  16. The image adaptive method for solder paste 3D measurement system

    NASA Astrophysics Data System (ADS)

    Xiaohui, Li; Changku, Sun; Peng, Wang

    2015-03-01

    The extensive application of Surface Mount Technology (SMT) requires various measurement methods to evaluate the circuit board. The solder paste 3D measurement system, which projects laser light onto the printed circuit board (PCB) surface, is one of the critical methods. Local oversaturation, arising from the non-uniform reflectivity of the PCB surface, leads to inaccurate measurements. This paper reports a novel optical image adaptive method of remedying local oversaturation for solder paste measurement. A liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) are combined as the high dynamic range image (HDRI) acquisition system. The significant characteristic of the new method is that the image after adjustment is captured by the specially designed HDRI acquisition system programmed by the LCoS mask. The formation of the LCoS mask, which depends on an HDRI combined with an image fusion algorithm, is based on separating the laser light from the locally oversaturated region. Experimental results demonstrate that the method can significantly improve the accuracy of the solder paste 3D measurement system under local oversaturation.
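
    The optics of an LCoS mask cannot be reproduced in software, but the underlying fusion idea — detect oversaturated pixels in the stronger exposure and replace them from a rescaled weaker one — can be sketched. The saturation level, exposure ratio, and test values are assumptions for illustration:

```python
import numpy as np

SATURATION = 255          # 8-bit sensor full scale (assumed)
EXPOSURE_RATIO = 4.0      # long exposure collects 4x the light (assumed)

def fuse_exposures(long_img, short_img):
    """Merge two exposures into one high-dynamic-range radiance map.

    Pixels saturated in the long exposure are taken from the short
    exposure, rescaled by the exposure ratio; the result is expressed
    in units of the long exposure.
    """
    mask = long_img >= SATURATION                    # oversaturated region
    return np.where(mask, short_img * EXPOSURE_RATIO, long_img)

# Synthetic scene radiance, in long-exposure counts.
radiance = np.array([40.0, 400.0, 800.0])
long_img = np.minimum(radiance, SATURATION)          # clips at 255
short_img = np.minimum(radiance / EXPOSURE_RATIO, SATURATION)
hdr = fuse_exposures(long_img, short_img)
print(hdr)  # [ 40. 400. 800.] -- the full range is recovered
```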

  17. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator which can project up to four different view images to each eye is introduced. Experiments with this simulator using images having both disparity and perspective show that the depth of field (DOF) extends beyond the default DOF values as the number of simultaneously but separately projected view images to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increase is less prominent for the image with both disparity and perspective than for the image with disparity only.

  18. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  19. Fast fully 3-D image reconstruction in PET using planograms.

    PubMed

    Brasse, D; Kinahan, P E; Clackdoyle, R; Defrise, M; Comtat, C; Townsend, D W

    2004-04-01

    We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15. PMID:15084067
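
    The key step above rests on the central-section (projection-slice) theorem; the 2-D discrete analogue can be verified numerically in a few lines: the 1-D DFT of a parallel projection equals the zero-frequency row of the image's 2-D DFT. This is an illustration of the theorem, not the 4-D planogram implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((32, 32))            # arbitrary 2-D "image"

# Project by integrating along axis 0 (a set of parallel line integrals).
projection = f.sum(axis=0)

# Central-section theorem: the 1-D FT of the projection equals the
# zero-frequency row of the 2-D FT of the image.
lhs = np.fft.fft(projection)
rhs = np.fft.fft2(f)[0, :]
print(np.allclose(lhs, rhs))  # True
```

    The planogram method applies the same identity one dimension up: integrating over parallel 2-D planes of the 4-D data equals taking a 2-D central section of its 4-D Fourier transform.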

  20. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which directly extracts targets from complex backgrounds and decreases the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that the moving trajectory can be obtained directly. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can yield the motion parameters of moving targets.
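
    The integration step can be sketched with synthetic data: accumulating many gated frames into one image makes the whole trajectory visible in a single frame. The frame size, target path, and per-pixel maximum accumulation are illustrative assumptions (hardware TDI sums charge rather than computing a maximum):

```python
import numpy as np

def integrate_frames(frames):
    """Collapse a stack of gated frames into one trajectory image by
    per-pixel maximum, so every position of the moving target survives."""
    return np.maximum.reduce(frames)

# Synthetic falling target: one bright pixel per frame, descending.
frames = []
for t in range(8):
    frame = np.zeros((8, 8))
    frame[t, 3] = 1.0          # target at row t, column 3
    frames.append(frame)

trajectory = integrate_frames(frames)
print(int(trajectory[:, 3].sum()))  # 8 -- all positions in one image
```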

  1. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    PubMed

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection onto convex sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low-resolution 3D images that have sub-pixel spatial displacements between each other and generated the reference image. Then, we mapped the low-resolution images into the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the images of different resolutions simultaneously. We evaluated the performance of the proposed method on 5 image sets and compared it with that of 3 interpolation reconstruction methods. The experiments showed that the 3D POCS algorithm outperformed the 3 interpolation reconstruction methods both subjectively and objectively, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules. PMID:26710449
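
    The core POCS mechanic — alternately projecting an estimate onto convex constraint sets until it satisfies all of them — can be sketched in 1-D. The block-average observation model and amplitude bounds below are assumptions standing in for the paper's 3-D motion-compensated consistency constraints:

```python
import numpy as np

def pocs_superres(lr, factor, n_iter=50):
    """1-D POCS sketch: find a high-res signal in [0, 1] whose block
    averages match the low-res observation `lr`.

    Set A: {x : mean of each block of `factor` samples equals lr}
    Set B: {x : 0 <= x <= 1}
    Alternating orthogonal projections onto A and B converge to a
    point in the intersection when it is non-empty.
    """
    x = np.repeat(lr, factor).astype(float)      # initial estimate
    for _ in range(n_iter):
        # Projection onto A: shift each block by its mean's residual.
        blocks = x.reshape(-1, factor)
        blocks += (lr - blocks.mean(axis=1))[:, None]
        x = blocks.ravel()
        # Projection onto B: clip to the allowed amplitude range.
        x = np.clip(x, 0.0, 1.0)
    return x

lr = np.array([0.2, 0.8, 0.5])                   # low-res observation
hr = pocs_superres(lr, factor=4)
print(np.allclose(hr.reshape(-1, 4).mean(axis=1), lr))  # True
```

    The 3D POCS algorithm uses the same loop, with the data-consistency projection defined through the estimated inter-volume motion instead of fixed blocks.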

  2. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2009-05-01

    OPTRA is developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill.
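
    Tomographic reconstruction from a perimeter of open-path instruments amounts to solving a system of path-integral equations. A minimal Kaczmarz/ART sketch on a hypothetical 2x2 concentration grid (the path geometry and values are invented for illustration, not OPTRA's algorithm):

```python
import numpy as np

def art_reconstruct(A, y, n_sweeps=2000):
    """Kaczmarz / ART: cyclically project the estimate onto the
    hyperplane of each path-integral equation  A[i] . x = y[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, y_i in zip(A, y):
            x += (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Hypothetical 2x2 concentration grid crossed by 5 optical paths
# (two rows, two columns, one diagonal); entries are path lengths.
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 0., 1.]])
truth = np.array([1., 2., 3., 4.])
y = A @ truth                       # simulated path integrals
x = art_reconstruct(A, y)
print(np.round(x, 3))
```

    With five independent paths over four cells the system is uniquely solvable, so the iteration recovers the true grid; real plume reconstruction is underdetermined and adds regularization.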

  3. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

    An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses a tiny, adjustable microelectromechanical system (MEMS) mirror, made at SiWave, to sweep the laser frequency. The laser device itself is small (70x50x13 mm). The LADAR uses mature fiber-optic telecommunication technologies throughout the system, making this innovation an efficient performer. The tiny size and light weight make the system useful for commercial and industrial applications including surface damage inspections, range measurements, and 3D imaging.
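
    The quoted 1 mm resolution can be related to the laser's frequency sweep with the standard swept-frequency (FMCW-style) relation ΔR = c/(2B). The abstract does not state the modulation scheme, so treating it as FMCW-like is an assumption:

```python
C = 299_792_458.0  # speed of light, m/s

def sweep_bandwidth_for_resolution(delta_r_m):
    """Required frequency-sweep bandwidth B = c / (2 * delta_R) for a
    frequency-swept (FMCW-style) ranging system."""
    return C / (2.0 * delta_r_m)

B = sweep_bandwidth_for_resolution(1e-3)   # 1 mm range resolution
print(f"{B / 1e9:.0f} GHz")                # 150 GHz of optical sweep
```

    An optical sweep of roughly 150 GHz is about 1.2 nm near 1550 nm, which is comfortably within reach of tunable telecom lasers, consistent with the abstract's use of fiber-optic telecommunication components.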

  4. Evaluation of the use of 3D printing and imaging to create working replica keys

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Kerlin, Scott

    2016-05-01

    This paper considers the efficacy of 3D scanning and printing technologies for producing duplicate keys. Duplication of keys based on remotely sensed data represents a significant security threat, as it removes pathways to determining who illicitly gained access to secured premises. Key to understanding the threat is characterizing how easily the required data can be gained for key production and how well keys produced by this method work. The results of an experiment to characterize this are discussed and generalized to different key types. The effect of alternate sources of data on imaging requirements is also considered.

  6. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    PubMed

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtraction angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive use of x-rays and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. Contrast medium-enhanced 3D DSA of target vessels was acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients and registered with the MRA. The MRA was afterwards overlaid with 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided a real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of this fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, hence lowering procedure risks and increasing treatment safety. PMID:27512846

  8. A comparison of the 3D kinematic measurements obtained by single-plane 2D-3D image registration and RSA.

    PubMed

    Muhit, Abdullah A; Pickering, Mark R; Ward, Tom; Scarvell, Jennie M; Smith, Paul N

    2010-01-01

    3D computed tomography (CT) to single-plane 2D fluoroscopy registration is an emerging technology for many clinical applications such as kinematic analysis of human joints and image-guided surgery. However, previous registration approaches have suffered from the inaccuracy of determining precise motion parameters for out-of-plane movements. In this paper we compare kinematic measurements obtained by a new 2D-3D registration algorithm with measurements provided by the gold standard Roentgen Stereo Analysis (RSA). In particular, we are interested in the out-of-plane translation and rotations which are difficult to measure precisely using a single plane approach. Our experimental results show that the standard deviation of the error for out-of-plane translation is 0.42 mm which compares favourably to RSA. It is also evident that our approach produces very similar flexion/extension, abduction/adduction and external knee rotation angles when compared to RSA. PMID:21097358

  9. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
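
    The expectation-maximization approach to maximum-likelihood reconstruction can be illustrated by the standard MLEM update for a linear Poisson imaging model (the cryo-EM formulation in the paper is considerably richer; the toy system matrix and data here are purely illustrative):

```python
# Minimal MLEM (maximum-likelihood expectation-maximization) sketch for a
# linear Poisson imaging model y ~ Poisson(A x).  Pure-Python toy example.

def mlem(A, y, n_iter=200):
    """A: list of rows (projection matrix), y: measured counts.
    Returns a non-negative estimate x maximizing the Poisson likelihood."""
    n_pix = len(A[0])
    x = [1.0] * n_pix  # strictly positive initial estimate
    sens = [sum(A[i][j] for i in range(len(A))) for j in range(n_pix)]  # A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(len(A))]
        ratio = [y[i] / p if p > 0 else 0.0 for i, p in enumerate(proj)]
        back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(n_pix)]
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]  # multiplicative update
    return x

# Toy 2-pixel object observed through 3 projections, with noiseless data.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 5.0, 7.0]
print(mlem(A, y))  # converges toward [2.0, 5.0]
```

The multiplicative form keeps the estimate non-negative at every iteration, one reason EM-style updates are popular for count-limited imaging data.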

  10. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and noninvasiveness, ultrasound has become an essential tool in obstetrics for diagnosing fetal abnormalities during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in near real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound. In addition, to accelerate rendering, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, the system supports quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  11. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory, which prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it: the two layers are expected to bond more strongly if the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results of a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.
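
    Layer-temperature decay toward ambient in such measurements is often well approximated by Newtonian cooling, T(t) = T_amb + (T0 - T_amb)·exp(-t/τ). A generic sketch of fitting the decay constant, not the paper's actual analysis (all temperatures and times are illustrative):

```python
import math

# Newtonian-cooling model commonly used for layer-temperature decay data:
#   T(t) = T_amb + (T0 - T_amb) * exp(-t / tau)

def cooling_temp(t, t_amb, t0, tau):
    return t_amb + (t0 - t_amb) * math.exp(-t / tau)

def fit_tau(times, temps, t_amb):
    """Estimate tau by a log-linear least-squares fit of (T - T_amb)."""
    ys = [math.log(T - t_amb) for T in temps]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

times = [0.0, 1.0, 2.0, 4.0, 8.0]                            # seconds
temps = [cooling_temp(t, 25.0, 230.0, 3.0) for t in times]   # synthetic data
print(fit_tau(times, temps, 25.0))  # recovers tau = 3.0
```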

  12. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    PubMed

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that the eye dominance effect does not have a strong impact on the visual quality decisions for stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  13. 3D fluorescence anisotropy imaging using selective plane illumination microscopy.

    PubMed

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-08-24

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202

  15. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that generates multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "shift and subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so that motion parallax is generated at the same time.
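
    The depth-map "volume slicing" used as the initial solution can be sketched as quantizing each pixel's depth and assigning its intensity to the corresponding display layer. This is a minimal illustration of the idea, not the authors' implementation; the function name and toy data are assumptions:

```python
# Sketch of depth-map "volume slicing": each pixel's intensity is assigned to
# one of n_layers display layers according to its quantized depth value.

def slice_into_layers(image, depth, n_layers):
    """image, depth: 2D lists of equal shape; depth values in [0, 1).
    Returns n_layers 2D lists; each pixel lands in exactly one layer."""
    h, w = len(image), len(image[0])
    layers = [[[0.0] * w for _ in range(h)] for _ in range(n_layers)]
    for y in range(h):
        for x in range(w):
            k = min(int(depth[y][x] * n_layers), n_layers - 1)
            layers[k][y][x] = image[y][x]
    return layers

img = [[10, 20], [30, 40]]
dep = [[0.1, 0.6], [0.9, 0.3]]
front, back = slice_into_layers(img, dep, 2)
print(front)  # [[10, 0.0], [0.0, 40]]  (pixels with depth < 0.5)
print(back)   # [[0.0, 20], [30, 0.0]]
```

The iterative shift-and-subtraction refinement described in the paper would then update these layers alternately to better match the target viewpoint images.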

  16. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars, endowed with multiple 3-dimensional capabilities, that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with 3-dimensional imaging capabilities that include sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion-Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms, a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data are fed into various inversion algorithms. This simulation package is a powerful tool for evaluating various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.

  17. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. PMID:25465067

  18. Multimodal, 3D pathology-mimicking bladder phantom for evaluation of cystoscopic technologies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Smith, Gennifer T.; Lurie, Kristen L.; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Optical coherence tomography (OCT) and blue light cystoscopy (BLC) have shown significant potential as complementary technologies to traditional white light cystoscopy (WLC) for early bladder cancer detection. Three-dimensional (3D) organ-mimicking phantoms provide realistic imaging environments for testing new technology designs, the diagnostic potential of systems, and novel image processing algorithms prior to validation in real tissue. Importantly, the phantom should mimic features of healthy and diseased tissue as they appear under WLC, BLC, and OCT, which are sensitive to tissue color and structure, fluorescent contrast, and optical scattering of subsurface layers, respectively. We present a phantom reproducing the hollow shape of the bladder, fabricated using a combination of 3D printing and spray-coating with Dragon Skin (DS) (Smooth-On Inc.), a highly elastic polymer, to mimic the layered structure of the bladder. Optical scattering of DS was tuned by the addition of titanium dioxide, resulting in scattering coefficients sufficient to cover the human bladder range (0.49 to 2.0 mm^-1). Mucosal vasculature and tissue coloration were mimicked with elastic cord and red dye, respectively. Urethral access was provided through a small hole excised from the base of the phantom. Inserted features of bladder pathology included altered tissue color (WLC), fluorescence emission (BLC), and variations in layered structure (OCT). The phantom surface and underlying material were assessed on the basis of elasticity, optical scattering, layer thicknesses, and qualitative image appearance. WLC, BLC, and OCT images of normal and cancerous features in the phantom qualitatively matched corresponding images from human bladders.

  19. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
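
    The stereoscopic height recovery can be illustrated with a simplified flat-earth layover model: a scatterer at height h appears displaced in ground range by roughly h/tan(depression angle), so two passes at different depression angles give two equations that can be solved for height. This is a geometry sketch under stated simplifications, not the report's full derivation:

```python
import math

# Simplified flat-earth layover model: a scatterer at height h above the focus
# plane appears displaced in ground range by d = h / tan(depression_angle).
# Two passes at different depression angles let us solve for the height.

def apparent_ground_range(x_true, h, depression_rad):
    return x_true - h / math.tan(depression_rad)

def height_from_stereo(r1, r2, dep1_rad, dep2_rad):
    """Solve r_i = x - h*cot(dep_i) for h from two measured image positions."""
    cot1 = 1.0 / math.tan(dep1_rad)
    cot2 = 1.0 / math.tan(dep2_rad)
    return (r2 - r1) / (cot1 - cot2)

dep1, dep2 = math.radians(30.0), math.radians(45.0)
r1 = apparent_ground_range(100.0, 5.0, dep1)  # pass 1 measurement
r2 = apparent_ground_range(100.0, 5.0, dep2)  # pass 2 measurement
print(height_from_stereo(r1, r2, dep1, dep2))  # recovers h = 5.0
```

As the abstract notes, the achievable height accuracy depends on how different the two imaging geometries are: the closer cot(dep1) is to cot(dep2), the more the measurement noise in r1 and r2 is amplified.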

  20. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.

  1. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. This paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation, which must cope with some challenging characteristics, such as small size, a wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the intensity inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate a significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation results for the hippocampus.
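
    The Dice metric used for validation is the standard overlap measure Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch over binary segmentation masks (the toy masks are illustrative):

```python
# Dice similarity coefficient: Dice = 2|A ∩ B| / (|A| + |B|),
# computed here over flat binary segmentation masks.

def dice(mask_a, mask_b):
    """mask_a, mask_b: flat lists of 0/1 labels of equal length."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    size_a, size_b = sum(mask_a), sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * inter / (size_a + size_b)

seg = [1, 1, 1, 0, 0, 0]  # automatic segmentation
gt  = [1, 1, 0, 0, 0, 1]  # ground truth
print(dice(seg, gt))  # 2*2 / (3+3) = 0.666...
```

A Dice value of 1 indicates perfect overlap with the ground truth; values near 0 indicate almost no overlap.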

  2. Multiview image integration system for glassless 3D display

    NASA Astrophysics Data System (ADS)

    Ando, Takahisa; Mashitani, Ken; Higashino, Masahiro; Kanayama, Hideyuki; Murata, Haruhiko; Funazou, Yasuo; Sakamoto, Naohisa; Hazama, Hiroshi; Ebara, Yasuo; Koyamada, Koji

    2005-03-01

    We have developed a multi-view image integration system, which combines seven parallax video images into a single video image so that it fits the parallax barrier. The apertures of this barrier are not stripes but tiny rectangles that are arranged in the shape of stairs. Commodity hardware is used to satisfy a specification which requires that the resolution of each parallax video image is SXGA(1645×800 pixel resolution), the resulting integrated image is QUXGA-W(3840×2400 pixel resolution), and the frame rate is fifteen frames per second. The point is that the system can provide with QUXGA-W video image, which corresponds to 27MB, at 15fps, that is about 2Gbps. Using the integration system and a Liquid Crystal Display with the parallax barrier, we can enjoy an immersive live video image which supports seven viewpoints without special glasses. In addition, since the system can superimpose the CG images of the relevant seven viewpoints into the live video images, it is possible to communicate with remote users by sharing a virtual object.

  3. Efficient RPG detection in noisy 3D image data

    NASA Astrophysics Data System (ADS)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data, as well as discrete-point invariant techniques to perform a real-time search for threats to a convoy. The shapes of the jump boundaries in the scene, rather than on-surface points, are exploited in this paper, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.
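
    The range-scaled chain code is specific to this work, but it builds on the classic 8-direction Freeman chain code for encoding boundary shape. A minimal sketch of the Freeman encoding (the range-scaling step described in the paper is omitted):

```python
# Classic 8-direction Freeman chain code for a boundary given as a sequence of
# 8-connected pixel coordinates.  The range-scaled variant in the paper would
# additionally normalize by the measured range; this sketch omits that step.

# Direction index for each (dx, dy) step, counter-clockwise from +x.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """points: list of (x, y) pixel coordinates along an 8-connected boundary."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS[(x1 - x0, y1 - y0)])
    return code

boundary = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2), (0, 1)]
print(chain_code(boundary))  # [0, 1, 2, 4, 5]
```

Chain codes are compact and translation-invariant, which makes them attractive descriptors for matching boundary shapes in real time.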

  4. Improvement of integral 3D image quality by compensating for lens position errors

    NASA Astrophysics Data System (ADS)

    Okui, Makoto; Arai, Jun; Kobayashi, Masaki; Okano, Fumio

    2004-05-01

    Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device with about 18,000 lenses using a super-high-definition camera with 2,000 scanning lines. Positional errors of these high-precision lenses as well as the camera's lenses will cause distortions in the elemental image, which directly affect the quality of the 3-D image and the viewing area. We have devised a way to compensate for such geometrical position errors and used it for the integral 3-D TV prototype, resulting in an improvement in both viewing zone and picture quality.

  5. Reproducing 2D breast mammography images with 3D printed phantoms

    NASA Astrophysics Data System (ADS)

    Clark, Matthew; Ghammraoui, Bahaa; Badal, Andreu

    2016-03-01

    Mammography is currently the standard imaging modality used to screen women for breast abnormalities and, as a result, it is a tool of great importance for the early detection of breast cancer. Physical phantoms are commonly used as surrogates of breast tissue to evaluate some aspects of the performance of mammography systems. However, most phantoms do not reproduce the anatomic heterogeneity of real breasts. New fabrication technologies, such as 3D printing, have created the opportunity to build more complex, anatomically realistic breast phantoms that could potentially assist in the evaluation of mammography systems. The primary objective of this work is to present a simple, easily reproducible methodology to design and print 3D objects that replicate the attenuation profile observed in real 2D mammograms. The secondary objective is to evaluate the capabilities and limitations of the competing 3D printing technologies, and characterize the x-ray properties of the different materials they use. Printable phantoms can be created using the open-source code introduced in this work, which processes a raw mammography image to estimate the amount of x-ray attenuation at each pixel, and outputs a triangle mesh object that encodes the observed attenuation map. The conversion from the observed pixel gray value to a column of printed material with equivalent attenuation requires certain assumptions and knowledge of multiple imaging system parameters, such as x-ray energy spectrum, source-to-object distance, compressed breast thickness, and average breast material attenuation. A detailed description of the new software, a characterization of the printed materials using x-ray spectroscopy, and an evaluation of the realism of the sample printed phantoms are presented.
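
    The conversion from an observed pixel value to an equivalent column of printed material can be illustrated with a monoenergetic Beer-Lambert sketch: match the total attenuation μ·t of the tissue column with that of the print column. This is a simplification of the described method (which accounts for a full x-ray spectrum and system geometry); the attenuation coefficients below are illustrative, not measured values:

```python
import math

# Monoenergetic Beer-Lambert sketch of the gray-value -> print-column-height
# conversion:  I = I0 * exp(-mu_tissue * t_tissue), and a print column of
# height t_print matches it when  mu_print * t_print = mu_tissue * t_tissue.

def equivalent_thickness(i_over_i0, mu_print):
    """Height (cm) of print material reproducing a measured transmission I/I0."""
    return -math.log(i_over_i0) / mu_print

MU_PRINT = 0.5   # 1/cm, hypothetical printed-material attenuation coefficient
MU_TISSUE = 0.8  # 1/cm, hypothetical breast-tissue attenuation coefficient

t_tissue = 4.0                                  # cm of compressed breast
transmission = math.exp(-MU_TISSUE * t_tissue)  # simulated detector signal
print(equivalent_thickness(transmission, MU_PRINT))  # 0.8 * 4 / 0.5 = 6.4 cm
```

In the real polyenergetic case, the effective attenuation coefficients depend on the x-ray energy spectrum, which is why the authors characterize the printed materials with x-ray spectroscopy.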

  6. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. Currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, their capture, and their transfer for creating 3D scenes have not yet been defined. Accuracy issues for 3D video scenes used for measurement purposes are rarely addressed in publications. The practicability of developing, researching, and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the application of data analysis to environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes meeting specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.

  7. 3D registration through pseudo x-ray image generation.

    PubMed

    Viant, W J; Barnel, F

    2001-01-01

    Registration of a preoperative plan with the intraoperative position of the patient is still a largely unsolved problem. Current techniques generally require fiducials, either artificial or anatomic, to achieve a registration solution. Invariably, these fiducials require implantation and/or direct digitisation. The technique described in this paper requires no digitisation or implantation of fiducials, but instead relies on the shape and form of the anatomy through a fully automated image comparison process. A pseudo image, generated from a virtual image intensifier's view of a CT dataset, is intraoperatively compared with a real x-ray image. The principle is to align the virtual with the real image intensifier. The technique is an extension of the work undertaken by Domergue [1] and is based on original ideas by Weese [4]. PMID:11317805

  8. Registration of multi-view apical 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Mulder, H. W.; van Stralen, M.; van der Zwaan, H. B.; Leung, K. Y. E.; Bosch, J. G.; Pluim, J. P. W.

    2011-03-01

    Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart. However, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusion of multiple echocardiography images, and successful registration of the images is essential for successful fusion. Therefore, this study examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets were annotated by two observers, who indicated the position of the apex and four points on the mitral valve ring. These annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined; multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as a similarity measure was compared to normalized cross-correlation (NCC). For initialization of the registration, a transformation that describes the probe movement was obtained by manually registering five representative data sets. It was found that multi-frame registration can improve registration results with respect to single-frame registration. Additionally, NCC outperformed MI as a similarity measure. If NCC was optimized in a multi-frame registration strategy including ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration. As registration precedes image fusion, this method can contribute to improved quality of echocardiography images.
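
    The multi-frame strategy, in which the similarity metric is optimized over several time frames simultaneously, can be sketched as follows (a minimal illustration with NCC, not the study's implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two image arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def multiframe_ncc(fixed_frames, moving_frames):
    """Multi-frame metric: average NCC over corresponding time frames
    (e.g. end-diastole and end-systole), evaluated jointly so that one
    rigid transform must fit all frames at once."""
    return sum(ncc(f, m) for f, m in zip(fixed_frames, moving_frames)) / len(fixed_frames)

rng = np.random.default_rng(0)
ed = rng.random((16, 16, 16))
es = rng.random((16, 16, 16))
# A perfectly registered pair scores 1; an unrelated pair scores near 0.
print(round(multiframe_ncc([ed, es], [ed, es]), 3))  # 1.0
```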

  9. 3D registration through pseudo x-ray image generation.

    PubMed

    Domergue, G; Viant, W J

    2000-01-01

    One of the less effective processes within current Computer Assisted Surgery systems utilizing pre-operative planning is the registration of the plan with the intra-operative position of the patient. The technique described in this paper requires no digitisation of anatomical features or fiducial markers, but instead relies on image matching between pseudo and real x-ray images, generated by a virtual and a real image intensifier respectively. The technique is an extension of the work undertaken by Weese [1]. PMID:10977585

  10. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images, derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that captures spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
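
    As an illustration of the three-dimensional decomposition idea (using a single-level separable Haar transform for simplicity, not ICER-3D's actual filter bank):

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the Haar transform along one axis: pairwise
    averages (low-pass) followed by pairwise differences (high-pass).
    Assumes an even length along that axis."""
    x = np.moveaxis(x, axis, 0)
    lo = (x[0::2] + x[1::2]) / 2.0
    hi = (x[0::2] - x[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_3d(cube):
    """Separable single-level 3-D decomposition: transform along the
    two spatial axes and the spectral axis in turn, so correlations in
    all three dimensions end up concentrated in the low-pass corner."""
    for axis in range(3):
        cube = haar_1d(cube, axis)
    return cube

spectral_cube = np.random.default_rng(1).random((8, 8, 8))
coeffs = haar_3d(spectral_cube)
print(coeffs.shape)  # (8, 8, 8)
```

    For smooth data the low-low-low corner holds most of the energy, which is what makes progressive transmission effective.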

  11. Computer-generated hologram for 3D scene from multi-view images

    NASA Astrophysics Data System (ADS)

    Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong

    2013-05-01

    Recently, computer-generated holograms (CGHs) calculated from real existing objects have been more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of a natural 3-D scene from multi-view images in order to provide motion-parallax viewing over a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from the multi-view images, a hologram pattern supporting motion parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed by numerical reconstruction.
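
    A point-based CGH superposes the spherical wavefront of every 3-D point source on the hologram plane. A minimal sketch, with illustrative wavelength, pixel pitch, and point positions (not the paper's parameters):

```python
import numpy as np

def point_source_cgh(points, amplitudes, wavelength, pitch, res):
    """Point-based CGH: superpose spherical waves from each 3-D point
    source on the hologram plane (z = 0) and keep the real part as a
    bipolar amplitude hologram. Units are meters."""
    k = 2 * np.pi / wavelength
    ys, xs = np.mgrid[0:res, 0:res]
    x = (xs - res / 2) * pitch
    y = (ys - res / 2) * pitch
    field = np.zeros((res, res), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r  # spherical wave from the point
    return field.real

# Two hypothetical points ~0.1 m behind a 256x256 hologram, 8 um pitch
h = point_source_cgh([(0, 0, 0.1), (1e-4, 0, 0.12)], [1.0, 0.8],
                     wavelength=532e-9, pitch=8e-6, res=256)
print(h.shape)  # (256, 256)
```

    The unified point set extracted from the multi-view images would be fed to `points`; numerical reconstruction then propagates the hologram field back to the object planes.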

  12. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated simultaneous views of the same object can be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactical animations and movies have been realized as well.

  13. Online reconstruction of 3D magnetic particle imaging data

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s‑1. However, to date, image reconstruction has been performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
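
    The block-averaging idea can be sketched as follows: incoming raw-data frames are averaged in blocks before reconstruction, trading temporal resolution for signal quality (the fixed block size here is illustrative; the paper's framework adapts it to the available reconstruction time):

```python
import numpy as np
from collections import deque

class OnlineBlockAverager:
    """Accumulates raw-data frames and emits a block-averaged frame
    for reconstruction whenever `block_size` frames have arrived.
    Averaging N frames raises the SNR by roughly sqrt(N) at the cost
    of temporal resolution."""
    def __init__(self, block_size):
        self.block_size = block_size
        self.buffer = deque()

    def push(self, frame):
        self.buffer.append(np.asarray(frame, dtype=float))
        if len(self.buffer) == self.block_size:
            avg = sum(self.buffer) / self.block_size
            self.buffer.clear()
            return avg      # hand this to the reconstruction step
        return None         # still accumulating

avg = OnlineBlockAverager(block_size=4)
out = [avg.push(np.full(3, i)) for i in range(8)]
emitted = [o for o in out if o is not None]
print(len(emitted))  # 2
```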

  14. Online reconstruction of 3D magnetic particle imaging data.

    PubMed

    Knopp, T; Hofmann, M

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes s(-1). However, to date, image reconstruction has been performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668

  15. Improved 3D cellular imaging by multispectral focus assessment

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, only one plane in the field of view is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well established that the microscope's point spread function (PSF) can be used for blur quantitation and for the restoration of real images. However, this is an ill-posed problem, with no unique solution and high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the degree of sharpness of each pixel, in order to identify blurred areas to exclude from consideration. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum- and morphometrics-based statistical analysis for abnormal cell detection can then be implemented on an image database in which the axial structure has been refined.
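
    A simple stand-in for the gradient map described above scores each pixel by its finite-difference gradient magnitude and masks out the weakest (blurred) regions; the threshold quantile is an illustrative choice, not the authors' calibrated design:

```python
import numpy as np

def sharpness_map(img):
    """Per-pixel focus measure: magnitude of the finite-difference
    intensity gradient. Blurred regions have weak gradients."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def in_focus_mask(img, quantile=0.8):
    """Keep only the sharpest pixels (illustrative threshold)."""
    s = sharpness_map(img)
    return s >= np.quantile(s, quantile)

# Toy image: a sharp step plus a nearly flat ramp
img = np.zeros((32, 32))
img[:, 8:16] = 1.0
img[:, 16:] = np.linspace(1.0, 0.9, 16)
mask = in_focus_mask(img)
print(mask.shape)  # (32, 32)
```

    Only pixels passing the mask would then enter the depth estimation and the downstream statistical analysis.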

  16. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilizes as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings of the computer simulations and demonstrate the merits of this novel paradigm. PMID:19380272

  17. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple-hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected, the 3D locations of these cells' pixels are estimated, and all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple-scattering-center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  18. Remote laboratory for phase-aided 3D microscopic imaging and metrology

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

    2014-05-01

    In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components: the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. Virtual network computing (VNC) is introduced to remotely control the 3D microscopic imaging system, while data storage and management are handled through the open-source project eSciDoc. For the security of the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, is employed to achieve the 3D imaging and metrology of micro objects.

  19. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a major step toward a better understanding of cell behavior and disease in a more natural environment, providing not only single but multiple cell type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for imaging human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving large-scale drug testing, as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  20. 3D digital breast tomosynthesis image reconstruction using anisotropic total variation minimization.

    PubMed

    Seyyedi, Saeed; Yildirim, Isa

    2014-01-01

    This paper presents a compressed-sensing-based reconstruction method for 3D digital breast tomosynthesis (DBT) imaging. The algebraic reconstruction technique (ART) has been used in DBT imaging with minimization of the isotropic total variation (TV) of the reconstructed image. However, the resolution in DBT differs in the sagittal and axial directions, which should be accounted for during TV minimization. In this study we develop a 3D anisotropic TV (ATV) minimization that considers the different resolutions in the different directions. A customized 3D Shepp-Logan phantom was generated to mimic a real DBT image, accounting for overlapping tissue and directional resolution. Results of ART, ART+3D TV, and ART+3D ATV are compared using structural similarity (SSIM) diagrams. PMID:25571377
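
    The anisotropic TV idea, in which finite differences along directions of different resolution receive different weights, can be sketched as follows (the weights are illustrative, not the paper's):

```python
import numpy as np

def anisotropic_tv(vol, weights=(1.0, 1.0, 0.2)):
    """Anisotropic total variation of a 3-D volume: per-axis weighted
    sum of absolute finite differences. A small weight on the depth
    axis penalizes variation in that direction less, reflecting DBT's
    coarser axial resolution (weights here are illustrative)."""
    return sum(w * np.abs(np.diff(vol, axis=ax)).sum()
               for ax, w in zip(range(3), weights))

vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 1.0  # a step along the depth axis only
iso = anisotropic_tv(vol, weights=(1.0, 1.0, 1.0))
ani = anisotropic_tv(vol)
print(iso, ani)  # 64.0 12.8
```

    An ART iteration would alternate projection-consistency updates with gradient steps that decrease this penalty.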

  1. Application of 3D GPR attribute technology in archaeological investigations

    NASA Astrophysics Data System (ADS)

    Zhao, Wen-Ke; Tian, Gang; Wang, Bang-Bing; Shi, Zhan-Jie; Lin, Jin-Xin

    2012-06-01

    Ground penetrating radar (GPR) attribute technology has been applied to many fields in recent years, but there are very few examples in archaeology, particularly regarding how to extract effective attributes from two- or three-dimensional radar data in order to map and describe numerous archaeological targets across a large cultural site. In this paper, we applied GPR attribute technology to investigate the ancient Nanzhao castle site in Tengchong, Yunnan Province. To better analyze and describe the archaeological targets (the ancient wall, the ancient kiln site, and the ancient tomb), we standardized the collected GPR data and imported them into a seismic data processing and interpretation workstation, where the data were processed through a variety of GPR attribute extraction, analysis, and optimization steps and combined with archaeological drilling data. We chose the RMS amplitude, average peak amplitude, instantaneous phase, and maximum peak time to interpret the three archaeological targets. Comparative analysis clarified that different attributes should be used to interpret different archaeological targets, and that the results of attribute analysis after horizon tracking are much better than results based on a time slice.

  2. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery; it is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but has a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the locations of a structure are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
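
    The MTF-from-PSF measurement can be sketched for a one-dimensional case: the MTF is the normalized magnitude of the Fourier transform of the PSF, and the quoted resolution is read off where the curve crosses a given level. A hypothetical Gaussian PSF and pixel size are assumed here, not the O-arm's measured data:

```python
import numpy as np

def mtf_from_psf(psf_1d, pixel_mm):
    """MTF = normalized magnitude of the Fourier transform of the PSF.
    Returns spatial frequencies (cycles/mm) and MTF values."""
    mtf = np.abs(np.fft.rfft(psf_1d))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(psf_1d), d=pixel_mm)
    return freqs, mtf

def frequency_at(freqs, mtf, level=0.10):
    """First sampled frequency where the MTF falls below `level`."""
    idx = int(np.argmax(mtf < level))
    return freqs[idx]

# Hypothetical Gaussian PSF (sigma = 0.3 mm) sampled at 0.1 mm pixels
x = np.arange(-32, 32) * 0.1
psf = np.exp(-x**2 / (2 * 0.3**2))
freqs, mtf = mtf_from_psf(psf, pixel_mm=0.1)
f10 = frequency_at(freqs, mtf)
print(f10, "cycles/mm at 10% MTF")
```

    The half-period 1/(2*f10) is the resolvable feature size at 10% MTF, analogous to the 0.45 mm figure quoted above.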

  3. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    PubMed

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of the anomaly, using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention; if fused with the information of live 2D images, they can also facilitate navigation and treatment. For this purpose, 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had a mean registration accuracy below 0.65 mm, comparable to the tested state-of-the-art methods, and an execution time below 1 s. With the highest rate of successful registrations and the largest capture range, the proposed method was the most robust and is thus a good candidate for application in EIGI. PMID:23649179

  4. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web user interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed such that users can select appropriate parameters for practical research and clinical studies. PMID:20022133

  5. Space Radar Image Isla Isabela in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults, and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing.
The SIR-C/X-SAR data

  6. Radar Imaging of Spheres in 3D using MUSIC

    SciTech Connect

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for one and two spheres. The performance for the three-sphere configurations is complicated by shadowing effects and the greater range of the third sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
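
    The MUSIC imaging steps described above (SVD of the response matrix, noise-subspace selection by a singular-value threshold, evaluation of the MUSIC functional) can be sketched for a toy single-scatterer case with free-space steering vectors; the array geometry and Born-model response matrix are illustrative, not the report's configuration:

```python
import numpy as np

def music_image(K, array_pos, grid, k_wave, noise_frac=0.01):
    """MUSIC functional from the multistatic response matrix K.
    Singular vectors whose singular values fall below `noise_frac` of
    the largest form the noise subspace; the functional peaks where the
    steering vector g(r) is orthogonal to that subspace."""
    U, s, _ = np.linalg.svd(K)
    noise = U[:, s < noise_frac * s[0]]      # noise-subspace basis
    img = np.empty(len(grid))
    for i, r in enumerate(grid):
        d = np.linalg.norm(array_pos - r, axis=1)
        g = np.exp(1j * k_wave * d) / d      # free-space steering vector
        g /= np.linalg.norm(g)
        img[i] = 1.0 / (np.linalg.norm(noise.conj().T @ g) + 1e-12)
    return img

# Toy setup: 8-element linear array, one point scatterer, wavelength 1
k_wave = 2 * np.pi
array_pos = np.stack([np.linspace(-2, 2, 8), np.zeros(8), np.zeros(8)], axis=1)
target = np.array([0.3, 0.0, 5.0])
d = np.linalg.norm(array_pos - target, axis=1)
g_t = np.exp(1j * k_wave * d) / d
K = np.outer(g_t, g_t)                       # rank-1 Born-model response
grid = [np.array([x, 0.0, 5.0]) for x in np.linspace(-1, 1, 41)]
img = music_image(K, array_pos, grid, k_wave)
print(int(np.argmax(img)))  # 26: the grid point at the target x = 0.3
```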

  7. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL and provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain; the rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through block matching. The differences in position between the corresponding regions in the left and right images are then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range in order to place the desired region of the 3D scene at convergence. All of the above computations are performed on one CPU thread, which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also contains a CPU display thread, which uses OpenGL rendering (quad buffers) and gathers user input for digital zoom and pan, sending it to the processing thread.
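
    The disparity estimation step (block matching around keypoints, then taking the mode/extrema of the disparity histogram) can be sketched with a sum-of-absolute-differences search on a synthetic stereo pair; the block size and search range are illustrative:

```python
import numpy as np

def block_match_disparity(left, right, x, y, block=5, max_d=16):
    """Horizontal disparity of the block around (x, y) in the left
    image, found by minimum sum-of-absolute-differences search over
    candidate shifts in the right image."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_d):
        if x - h - d < 0:
            break
        cand = right[y - h:y + h + 1, x - h - d:x + h + 1 - d]
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic pair: the right image is the left shifted 4 pixels leftward
rng = np.random.default_rng(2)
left = rng.random((64, 64))
right = np.roll(left, -4, axis=1)
disps = [block_match_disparity(left, right, x, y)
         for x in range(20, 44, 4) for y in range(20, 44, 4)]
hist, _ = np.histogram(disps, bins=range(18))
convergence_shift = int(np.argmax(hist))  # mode of the disparity histogram
print(convergence_shift)  # 4
```

    In the player, the extrema of this histogram define the disparity range used to shift the images to the chosen convergence point.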

  8. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-01-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications. PMID:27435424

  9. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of tangible cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera, which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, making it accessible to public users and convenient for reaching narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand), known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. The 3D modeling workflow consists of the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. As initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC open-source software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, and .obj. To obtain efficient 3D models, post-processing techniques such as noise reduction and surface simplification and reconstruction are applied to the final results. The reconstructed 3D models can be made publicly accessible via websites, DVDs, or printed materials. The highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.

  10. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  11. An Image-Based Technique for 3d Building Reconstruction Using Multi-View Uav Images

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects in complex city structures, has become a challenging topic in computer vision and photogrammetry research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work are pose estimation, point cloud generation, and 3D modelling. After the initial values of the interior and exterior orientation parameters are improved in the first step, an efficient image matching technique, Semi-Global Matching (SGM), is applied to the UAV images and a dense point cloud is generated. A mesh model of the points is then calculated using 2.5D Delaunay triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.

  12. Visualization of 3D images from multiple texel images created from fused LADAR/digital imagery

    NASA Astrophysics Data System (ADS)

    Killpack, Cody C.; Budge, Scott E.

    2015-05-01

    The ability to create 3D models, using registered texel images (fused ladar and digital imagery), is an important topic in remote sensing. These models are automatically generated by matching multiple texel images into a single common reference frame. However, rendering a sequence of independently registered texel images often presents challenges. Although accurately registered, the model textures are often incorrectly overlapped and interwoven when standard rendering techniques are used. Consequently, corrections must be made after all the primitives have been rendered, by determining the best texture for any viewable fragment in the model. Determining the best texture is difficult, as each texel image remains independent after registration. The depth data are not merged to form a single 3D mesh, thus eliminating the possibility of generating a fused texture atlas. It is therefore necessary to determine which textures are overlapping and how best to combine them dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can now be hidden, exposed, or blended according to their computed measure of reliability.
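    One plausible 3D geometric ranking, assumed here for illustration (the paper's exact criterion is not reproduced), scores each candidate texture by how head-on and how close its camera viewed the fragment:

    ```python
    import math

    def texture_score(cam_pos, frag_pos, frag_normal):
        """Score a texel image for one fragment: favour cameras viewing the
        surface head-on (large cosine to the normal) and from close range.
        A zero score marks the texture as unusable for this fragment."""
        view = [c - f for c, f in zip(cam_pos, frag_pos)]
        dist = math.sqrt(sum(v * v for v in view))
        cos_angle = sum(v * n for v, n in zip(view, frag_normal)) / dist
        return max(cos_angle, 0.0) / (1.0 + dist)

    frag, normal = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    cams = {"head_on": (0, 0, 2), "oblique": (2, 0, 2), "behind": (0, 0, -2)}
    best = max(cams, key=lambda k: texture_score(cams[k], frag, normal))
    ```

    At render time such scores decide which overlapping fragments are hidden, exposed, or blended.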

  13. A featureless approach to 3D polyhedral building modeling from aerial images.

    PubMed

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image raw brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
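    The Differential Evolution optimizer named above can be sketched as a minimal DE/rand/1/bin loop. The objective below is a toy quadratic standing in for the paper's image-based dissimilarity measure; population size and DE constants are illustrative assumptions:

    ```python
    import random

    def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=120):
        """Minimal DE/rand/1/bin: mutate with a + F*(b - c), binomial
        crossover with rate CR, greedy selection against the parent."""
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        cost = [f(x) for x in pop]
        for _ in range(gens):
            for i in range(pop_size):
                a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
                trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                         for d in range(dim)]
                trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
                if f(trial) < cost[i]:
                    pop[i], cost[i] = trial, f(trial)
        best = min(range(pop_size), key=cost.__getitem__)
        return pop[best]

    random.seed(0)
    # Stand-in objective: quadratic bowl with minimum at (1.5, -0.5).
    sol = differential_evolution(lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2,
                                 bounds=[(-5, 5), (-5, 5)])
    ```

    DE's appeal here is that it needs no gradients of the dissimilarity measure, only function evaluations.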

  14. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image raw brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

  15. The Impact of Web3D Technologies on Medical Education and Training

    ERIC Educational Resources Information Center

    John, Nigel W.

    2007-01-01

    This paper provides a survey of medical applications that make use of Web3D technologies, covering the period from 1995 to 2005. We assess the impact that Web3D has made on medical education and training during this time and highlight current and future trends. The applications identified are categorized into: general education tools; tools for…

  16. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements from single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of the anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to achieve over 1000 simultaneous A-scans at more than 75 frames per second.
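    Corneal curvature from a grid of surface points is often obtained by fitting a sphere. A standard linear least-squares sphere fit (a generic sketch, not this system's algorithm) uses the identity |p|² = 2c·p + (r² − |c|²), which is linear in the center c and the constant k = r² − |c|²:

    ```python
    import numpy as np

    def fit_sphere(pts):
        """Linear least-squares sphere fit: solve [2x 2y 2z 1]·[c; k] = |p|^2,
        then recover the radius from r^2 = k + |c|^2."""
        A = np.hstack([2 * pts, np.ones((len(pts), 1))])
        b = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, k = sol[:3], sol[3]
        radius = np.sqrt(k + center @ center)
        return center, radius

    # Synthetic "corneal" cap: exact points on a sphere of radius 7.8 mm.
    rng = np.random.default_rng(3)
    theta = rng.uniform(0, 0.4, 200)          # polar angles near the apex
    phi = rng.uniform(0, 2 * np.pi, 200)
    c_true, r_true = np.array([0.1, -0.2, 7.8]), 7.8
    pts = c_true + r_true * np.stack([np.sin(theta) * np.cos(phi),
                                      np.sin(theta) * np.sin(phi),
                                      -np.cos(theta)], axis=1)
    center, radius = fit_sphere(pts)
    ```

    With noisy elevation data the same system is solved in the least-squares sense, which is why sub-micron elevation accuracy matters for curvature.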

  17. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  19. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens which are not obvious prior to registration.
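    The mutual-information similarity measure driving such registration can be computed from a joint histogram. A minimal sketch (bin count and image sizes are arbitrary choices):

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Histogram-based mutual information between two images:
        MI = sum p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = joint / joint.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0                       # avoid log(0) on empty bins
        return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

    rng = np.random.default_rng(4)
    img = rng.random((64, 64))
    shuffled = rng.permutation(img.ravel()).reshape(64, 64)
    mi_self = mutual_information(img, img)       # aligned: high MI
    mi_rand = mutual_information(img, shuffled)  # unrelated: near zero
    ```

    A registration loop maximizes this quantity over candidate transforms; the local-maximum problem mentioned above is exactly why the authors add ground-truth-validated enhancements.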

  20. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
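    The linear core of this method, the classical eight-point algorithm, can be sketched directly: each correspondence contributes one row of a homogeneous system, solved by SVD. The synthetic camera motion below is an illustrative assumption:

    ```python
    import numpy as np

    def eight_point(x1, x2):
        """Each pair (x1, x2) of normalised image points gives one linear
        equation in the 9 entries of the essential matrix E (from the
        epipolar constraint x2^T E x1 = 0); E is the right singular vector
        of the stacked system with the smallest singular value."""
        rows = [[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                for (u1, v1), (u2, v2) in zip(x1, x2)]
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        return Vt[-1].reshape(3, 3)

    # Synthetic rig: rotate about y, translate, project 8 generic points.
    rng = np.random.default_rng(5)
    t = np.deg2rad(5.0)
    R = np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
    T = np.array([1.0, 0.2, 0.1])
    P = rng.uniform([-1, -1, 4], [1, 1, 8], size=(8, 3))   # 3-D points
    Q = P @ R.T + T                                        # second camera frame
    x1 = P[:, :2] / P[:, 2:]                               # normalised coordinates
    x2 = Q[:, :2] / Q[:, 2:]
    E = eight_point(x1, x2)
    resid = [abs(np.array([u2, v2, 1]) @ E @ np.array([u1, v1, 1]))
             for (u1, v1), (u2, v2) in zip(x1, x2)]
    ```

    With exact correspondences the epipolar residuals vanish to numerical precision; recovering R and T from E (the SVD decomposition step the abstract mentions) is a further factorization not shown here.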

  1. Task-specific evaluation of 3D image interpolation techniques

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Miki, Yukio

    1998-06-01

    Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature. However, their systematic evaluation is lacking. At a previous meeting, we presented a framework for the task-independent comparison of interpolation methods based on a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this new work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of Multiple Sclerosis (MS) patients. Sixty lesion detection experiments, derived from ten patient studies, two subsampling techniques plus the original data, and three interpolation methods, are presented along with a statistical analysis of the results. This work comprises a systematic framework for the task-specific comparison of interpolation methods. Specifically, the influence of three interpolation methods on MS lesion quantification is compared.
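    The kind of comparison described, how an interpolation method changes a downstream quantity, can be illustrated with the simplest case: reconstructing a removed middle slice by nearest-neighbour versus linear interpolation (a generic sketch; the paper's actual methods and phantom are not reproduced):

    ```python
    import numpy as np

    def interpolate_slice(s0, s1, method):
        """Estimate the missing middle slice from its two neighbours."""
        if method == "nearest":
            return s0.copy()             # nearest neighbour along the slice axis
        return 0.5 * (s0 + s1)           # linear; shape-based methods omitted

    # Smooth phantom: intensity increases by 1 per slice along the axis.
    base = np.linspace(0, 1, 64).reshape(8, 8)
    s0, s_true, s1 = base, base + 1.0, base + 2.0

    err_nearest = np.abs(interpolate_slice(s0, s1, "nearest") - s_true).mean()
    err_linear = np.abs(interpolate_slice(s0, s1, "linear") - s_true).mean()
    ```

    A task-specific evaluation would propagate such errors through the lesion segmentation and compare the resulting volume estimates, rather than the voxel errors themselves.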

  2. An evaluation of cine-mode 3D portal image dosimetry for Volumetric Modulated Arc Therapy

    NASA Astrophysics Data System (ADS)

    Ansbacher, W.; Swift, C.-L.; Greer, P. B.

    2010-11-01

    We investigated cine-mode portal imaging on a Varian Trilogy accelerator and found that the linearity and other dosimetric properties are sufficient for 3D dose reconstruction as used in patient-specific quality assurance for VMAT (RapidArc) treatments. We also evaluated the gantry angle label in the portal image file header as a surrogate for the true imaged angle. The precision is only just adequate for the 3D evaluation method chosen, as discrepancies of 2° were observed.

  3. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using an NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
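    Conceptually a DRR is a set of line integrals of attenuation through the CT volume. The simplest parallel-beam sketch (the paper's GPU version instead casts perspective rays, which this toy omits):

    ```python
    import numpy as np

    def drr_parallel(volume, axis=0, spacing=1.0):
        """Parallel-beam DRR sketch: line-integrate attenuation along one
        axis and apply the Beer-Lambert law to get transmitted intensity."""
        line_integral = volume.sum(axis=axis) * spacing
        return np.exp(-line_integral)      # values in (0, 1]

    ct = np.zeros((16, 16, 16))
    ct[4:12, 4:12, 4:12] = 0.1             # a dense cube inside air
    drr = drr_parallel(ct)                 # 16 x 16 radiograph
    ```

    The per-ray independence of these integrals is exactly what makes DRR generation so amenable to many-core GPU parallelisation.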

  4. Space Radar Image of Missoula, Montana in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA

  5. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and are then compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity.
SIR-C was developed by NASA's Jet Propulsion Laboratory

  6. Space Radar Image of Long Valley, California in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are

  7. Space Radar Image of Karakax Valley, China 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues it may give them about large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted

  8. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images taken by the same camera while continually rotating the object by a small angle. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing the repeated imaging of the same animal transplanted with gene-marked cells. In order to visualize the structure of the tumor in 3D, we also co-register the BLI-reconstructed crude structure with the detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.

  9. Determining 3D Flow Fields via Multi-camera Light Field Imaging

    PubMed Central

    Truscott, Tadd T.; Belden, Jesse; Nielson, Joseph R.; Daily, David J.; Thomson, Scott L.

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture [1]. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
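    Synthetic aperture refocusing reduces, for a linear camera array and a fronto-parallel plane, to shift-and-average: features on the chosen depth plane realign across views while everything else blurs out. A minimal sketch with per-camera parallax modelled as a horizontal pixel shift (an idealisation of the calibrated reprojection used in practice):

    ```python
    import numpy as np

    def sa_refocus(views, shifts):
        """Shift each camera's image by its depth-dependent parallax and
        average; objects at the chosen depth stay sharp in the result."""
        return np.mean([np.roll(v, -s, axis=1) for v, s in zip(views, shifts)],
                       axis=0)

    rng = np.random.default_rng(6)
    scene = rng.random((32, 32))
    # A 5-camera linear array: camera i sees the scene shifted by 2*i pixels.
    views = [np.roll(scene, 2 * i, axis=1) for i in range(5)]
    refocused = sa_refocus(views, shifts=[2 * i for i in range(5)])
    ```

    Sweeping the shift magnitude generates the 3D focal stack from which particles or bubbles at each depth are extracted.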

  10. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (X, Y) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x,y,λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. Axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629
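    The depth axis in spectral-domain OCT comes from Fourier-transforming each spectral interferogram: a reflector at depth z modulates the spectrum with a fringe frequency proportional to z. A toy 1-D sketch of that inversion (sample count and fringe frequency are arbitrary):

    ```python
    import numpy as np

    n = 512
    k = np.arange(n)                       # sample index across the spectrum
    depth_bin = 40                         # fringe frequency <=> reflector depth
    # Spectral interferogram: DC term plus a depth-encoding fringe.
    interferogram = 1.0 + 0.5 * np.cos(2 * np.pi * depth_bin * k / n)
    a_scan = np.abs(np.fft.rfft(interferogram))
    peak = int(np.argmax(a_scan[1:])) + 1  # skip the DC term at bin 0
    ```

    In the snapshot system this transform is applied to every (x, y) position of the datacube in a single camera integration, yielding the full volume at once.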

  11. 3D image guidance in radiotherapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Ebert, Matthias; Groh, Burkhard A.; Partridge, Mike; Hesse, Bernd M.; Bortfeld, Thomas

    2001-07-01

    Currently, one major research field in radiotherapy is focused on patient setup verification and on detection of organ motion and deformation. A phantom study is performed to demonstrate the feasibility of image guidance in radiotherapy. Patient setup errors are simulated with a humanoid phantom, which is imaged using a linear accelerator and a therapy simulator to address megavoltage and kilovoltage (kV) computed tomography (CT), respectively. Projections are recorded by a flat-panel imager. The various data sets of the humanoid phantom are compared by mutual information matching. The CT investigations show that the spatial resolution is better than 1.6 mm for high-contrast objects. The uncertainties remaining after mutual information matching are found to be less than 1 mm for translations and 1 degree for rotations. The phantom study indicates that the detection of patient setup errors as well as organ motion or deformation is possible with high accuracy, especially if a kV X-ray tube could be attached to the linear accelerator. The presented method allows sophisticated quality assurance of beam delivery in each fraction and may even enable the use of new concepts of adaptive radiotherapy.

  12. Evaluation of a new method for stenosis quantification from 3D x-ray angiography images

    NASA Astrophysics Data System (ADS)

    Betting, Fabienne; Moris, Gilles; Knoplioch, Jerome; Trousset, Yves L.; Sureda, Francisco; Launay, Laurent

    2001-05-01

    A new method for stenosis quantification from 3D X-ray angiography images has been evaluated on both phantom and clinical data. On phantoms, for vessel parts larger than or equal to 3 mm, the standard deviation of the measurement error was always found to be less than or equal to 0.4 mm, and the maximum measurement error less than 0.17 mm. No clear relationship has been observed between the performance of the quantification method and the acquisition FoV. On clinical data, the 3D quantification method proved to be more robust to vessel bifurcations than its 2D equivalent. On a total of 15 clinical cases, the differences between 2D and 3D quantification were always less than 0.7 mm. The conclusion is that stenosis quantification from 3D X-ray angiography images is an attractive alternative to quantification from 2D X-ray images.

  13. The application of digital medical 3D printing technology on tumor operation

    NASA Astrophysics Data System (ADS)

    Chen, Jimin; Jiang, Yijian; Li, Yangsheng

    2016-04-01

    Digital medical 3D printing technology is a new high-tech field that combines traditional medicine with digital design, computer science, biotechnology and 3D printing. At present there are four levels of application. The printed 3D model is the first and simplest: the surgeon uses the model to plan the procedure before the operation. The second is customized surgical tools, such as implant guides, which let the doctor operate with purpose-built tools rather than standard medical instruments. The third level is printing artificial bones or teeth for implantation into the human body. The big challenge is the fourth level, printing organs with 3D printing technology. In this paper we introduce an application of 3D printing technology in tumor surgery: 3D-printed guides for invasive procedures, with puncture needles guided by the printed guide in facial tumor operations. It is concluded that this new type of guide offers clear advantages.

  14. Astigmatic multifocus microscopy enables deep 3D super-resolved imaging

    PubMed Central

    Oudjedi, Laura; Fiche, Jean-Bernard; Abrahamsson, Sara; Mazenq, Laurent; Lecestre, Aurélie; Calmon, Pierre-François; Cerf, Aline; Nöllmann, Marcelo

    2016-01-01

    We have developed a 3D super-resolution microscopy method that enables deep imaging in cells. This technique relies on the effective combination of multifocus microscopy and astigmatic 3D single-molecule localization microscopy. We describe the optical system and the fabrication process of its key element, the multifocus grating. Then, two strategies for localizing emitters with our imaging method are presented and compared with a previously described deep 3D localization algorithm. Finally, we demonstrate the performance of the method by imaging the nuclear envelope of eukaryotic cells reaching a depth of field of ~4 µm. PMID:27375935

  15. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Since alveoli destroyed by emphysema cannot be restored, early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
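    The low attenuation area (LAA) extraction step can be illustrated with a simple Hounsfield unit (HU) threshold over the lung mask. The -950 HU cutoff below is a commonly used value in the emphysema literature, not necessarily the one used in this work, and the volume is synthetic:

```python
import numpy as np

LAA_THRESHOLD_HU = -950  # common "LAA%-950" cutoff; the paper's value may differ

def laa_percentage(ct_hu, lung_mask):
    """Percent of lung voxels classified as low attenuation area (LAA)."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < LAA_THRESHOLD_HU) / lung_voxels.size

# Synthetic volume: normal lung around -850 HU, a pocket of emphysema near -980 HU
vol = np.full((20, 20, 20), -850.0)
vol[5:10, 5:10, 5:10] = -980.0       # 125 emphysematous voxels out of 8000
mask = np.ones(vol.shape, dtype=bool)
print(round(laa_percentage(vol, mask), 2))  # 1.56
```

    Comparing this percentage between baseline and follow-up scans is one simple way to track evolution over time.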

  16. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Since alveoli destroyed by emphysema cannot be restored, early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  17. Astigmatic multifocus microscopy enables deep 3D super-resolved imaging.

    PubMed

    Oudjedi, Laura; Fiche, Jean-Bernard; Abrahamsson, Sara; Mazenq, Laurent; Lecestre, Aurélie; Calmon, Pierre-François; Cerf, Aline; Nöllmann, Marcelo

    2016-06-01

    We have developed a 3D super-resolution microscopy method that enables deep imaging in cells. This technique relies on the effective combination of multifocus microscopy and astigmatic 3D single-molecule localization microscopy. We describe the optical system and the fabrication process of its key element, the multifocus grating. Then, two strategies for localizing emitters with our imaging method are presented and compared with a previously described deep 3D localization algorithm. Finally, we demonstrate the performance of the method by imaging the nuclear envelope of eukaryotic cells reaching a depth of field of ~4 µm. PMID:27375935

  18. 3D face reconstruction from limited images based on differential evolution

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-09-01

    3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction using a simple and efficient global optimizer. A 3D-2D matching algorithm that integrates the 3D morphable model (3DMM) and the differential evolution (DE) algorithm is presented. In 3DMM, the process of fitting shape and texture information to 2D images is treated as a search for the global minimum in a high-dimensional feature space, where optimization is prone to converging to local minima. Unlike the traditional scheme used in 3DMM, DE is robust against stagnation in local minima and against sensitivity to initial values in face reconstruction. Benefiting from DE's performance, 3D face models can be created from a single 2D image under various illumination and pose conditions. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance. The validation process shows only an insignificant difference between the input image and the 2D face image projected from the 3D model.
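    Differential evolution as a global optimizer can be sketched on a toy fitting problem. The two-parameter cost function below is only a stand-in for the much higher-dimensional 3DMM fitting objective, using SciPy's stock implementation:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy analogue of model fitting: recover parameters (a, b) that best
# reproduce an observed signal, using DE as the global optimizer.
x = np.linspace(0.0, 1.0, 50)
observed = 2.0 * np.sin(3.0 * x)          # ground truth: a = 2, b = 3

def cost(params):
    a, b = params
    return float(np.sum((a * np.sin(b * x) - observed) ** 2))

# DE searches the whole bounded space, so no initial guess is needed
result = differential_evolution(cost, bounds=[(0, 5), (0, 5)], seed=1)
a_hat, b_hat = result.x
```

    The same population-based search is what makes DE robust to the local minima that plague gradient-style 3DMM fitting, at the price of many more cost-function evaluations.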

  19. A Framework for 3D Vessel Analysis using Whole Slide Images of Liver Tissue Sections

    PubMed Central

    Liang, Yanhui; Wang, Fusheng; Treanor, Darren; Magee, Derek; Roberts, Nick; Teodoro, George; Zhu, Yangyang; Kong, Jun

    2015-01-01

    Three-dimensional (3D) high resolution microscopic images have high potential for improving the understanding of both normal and disease processes where structural changes or spatial relationship of disease features are significant. In this paper, we develop a complete framework applicable to 3D pathology analytical imaging, with an application to whole slide images of sequential liver slices for 3D vessel structure analysis. The analysis workflow consists of image registration, segmentation, vessel cross-section association, interpolation, and volumetric rendering. To identify biologically-meaningful correspondence across adjacent slides, we formulate a similarity function for four association cases. The optimal solution is then obtained by constrained Integer Programming. We quantitatively and qualitatively compare our vessel reconstruction results with human annotations. Validation results indicate a satisfactory concordance as measured both by region-based and distance-based metrics. These results demonstrate a promising 3D vessel analysis framework for whole slide images of liver tissue sections. PMID:27034719
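    The cross-section association step can be illustrated in a simplified form. The paper solves a constrained Integer Program over four association cases; the sketch below handles only the plain one-to-one case, which reduces to a linear assignment over a centroid-distance cost (all coordinates are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Vessel cross-sections on two adjacent slides, represented by centroids (x, y)
slide_a = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0]])
slide_b = np.array([[26.0, 31.0], [11.0, 9.0], [39.0, 13.0]])

# Cost here is just centroid distance; the paper's similarity function also
# handles bifurcation / appearance / disappearance cases.
cost = np.linalg.norm(slide_a[:, None, :] - slide_b[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
pairs = dict(zip(rows.tolist(), cols.tolist()))
print(pairs)  # {0: 1, 1: 2, 2: 0}
```

    Chaining such correspondences slide-to-slide is what allows vessel cross-sections to be interpolated into continuous 3D tubes.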

  20. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978

  1. 3D CARS image reconstruction and pattern recognition on SHG images

    NASA Astrophysics Data System (ADS)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based, e.g., on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e. image pattern recognition and automated image classification of nonlinear optical images, still bears great potential for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed, e.g., in keloid formation. Based on the different collagen patterns, a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, based respectively on a statistical analysis of the mutual organization of fibers and on the FFT, are presented.

  2. Implementation of real-time 3D image communication system using stereoscopic imaging and display scheme

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Chul; Kim, Dong-Kyu; Ko, Jung-Hwan; Kim, Eun-Soo

    2004-11-01

    In this paper, a new stereoscopic 3D imaging communication system for real-time teleconferencing applications is implemented using IEEE 1394 digital cameras, an Intel Xeon server system and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate. In the proposed system, two-view images are captured by two digital cameras and processed on the Intel Xeon server. Disparity data is then extracted from them and transmitted to the client system together with the left image over a network; at the receiver, the two-view images are reconstructed and displayed on the stereoscopic 3D display system. The program controlling the overall system is developed using the Microsoft DirectShow SDK. Experimental results show that the proposed system can display stereoscopic images in real time with 16-bit full color at a frame rate of 15 fps.
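    Disparity extraction from a two-view pair, the bandwidth-saving step in the system above, is classically done by block matching. A brute-force sum-of-absolute-differences sketch (the window size, disparity range and synthetic images are arbitrary choices, not the system's):

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=8):
    """Brute-force SAD block matching; returns per-pixel horizontal disparity."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            # SAD against the right image at each candidate disparity
            errors = [np.abs(patch - right[y-half:y+half+1, x-d-half:x-d+half+1]).sum()
                      for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(errors))
    return disp

rng = np.random.default_rng(0)
right = rng.random((32, 64))
left = np.roll(right, 3, axis=1)   # scene shifted 3 px between the two views
d = disparity_map(left, right)     # interior pixels recover disparity 3
```

    The receiver can then reconstruct the second view from the transmitted left image plus this disparity field, which is far smaller than a second full image.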

  3. 3D Synchrotron Imaging of a Directionally Solidified Ternary Eutectic

    NASA Astrophysics Data System (ADS)

    Dennstedt, Anne; Helfen, Lukas; Steinmetz, Philipp; Nestler, Britta; Ratke, Lorenz

    2016-03-01

    For the first time, the microstructure of directionally solidified ternary eutectics is visualized in three dimensions, using a high-resolution X-ray tomography technique at the ESRF. The microstructure characterization is conducted at a photon energy that allows the three phases Ag2Al, Al2Cu, and the α-aluminum solid solution to be clearly discriminated. The reconstructed images illustrate the three-dimensional arrangement of the phases. The Ag2Al lamellae undergo splitting and merging as well as nucleation and disappearance events during directional solidification.

  4. HERMES Results on the 3D Imaging of the Nucleon

    NASA Astrophysics Data System (ADS)

    Pappalardo, L. L.

    2016-02-01

    In the last decades, a formalism of transverse momentum dependent parton distribution functions (TMDs) and of generalised parton distributions (GPDs) has been developed in the context of non-perturbative QCD, opening the way for a tomographic imaging of the nucleon structure. TMDs and GPDs provide complementary three-dimensional descriptions of the nucleon structure in terms of parton densities. They thus contribute, with different approaches, to the understanding of the full phase-space distribution of partons. A selection of HERMES results sensitive to TMDs is presented.

  5. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  6. HERMES Results on the 3D Imaging of the Nucleon

    NASA Astrophysics Data System (ADS)

    Pappalardo, L. L.

    2016-07-01

    In the last decades, a formalism of transverse momentum dependent parton distribution functions (TMDs) and of generalised parton distributions (GPDs) has been developed in the context of non-perturbative QCD, opening the way for a tomographic imaging of the nucleon structure. TMDs and GPDs provide complementary three-dimensional descriptions of the nucleon structure in terms of parton densities. They thus contribute, with different approaches, to the understanding of the full phase-space distribution of partons. A selection of HERMES results sensitive to TMDs is presented.

  7. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually-traced boundary is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while a human observer needs approximately two hours to accomplish the same task.
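    The thresholding-plus-connected-components idea behind the lung extraction step can be sketched in a few lines. The -400 HU threshold, the toy 2D slice and the "keep the two largest components" rule below are illustrative simplifications, not the paper's exact procedure:

```python
import numpy as np
from scipy import ndimage

def largest_components(mask, n=2):
    """Keep the n largest connected components of a binary mask."""
    labels, count = ndimage.label(mask)
    if count == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
    keep = np.argsort(sizes)[::-1][:n] + 1   # label ids of the n largest
    return np.isin(labels, keep)

# Synthetic axial slice: body ~0 HU, two air-filled "lungs" ~ -800 HU
slice_hu = np.zeros((64, 64))
slice_hu[10:50, 8:28] = -800    # left lung
slice_hu[10:50, 36:56] = -800   # right lung
slice_hu[30, 30] = -800         # a stray air voxel (noise)

# Threshold on attenuation, then discard small spurious components
lung_mask = largest_components(slice_hu < -400, n=2)
```

    The subsequent left/right separation along the central fissure needs the anatomy-aware dynamic-programming step described in the abstract, which is beyond this sketch.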

  8. Registration of 3-D images using weighted geometrical features

    SciTech Connect

    Maurer, C.R. Jr.; Aboutanos, G.B.; Dawant, B.M.; Maciunas, R.J.; Fitzpatrick, J.M.

    1996-12-01

    In this paper, the authors present a weighted geometrical features (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. The authors use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients who underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. The authors define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or a surface.
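    The point-only part of such a registration, aligning corresponding fiducial markers, has a closed-form weighted least-squares solution via the SVD (the Kabsch/Umeyama construction). This is just the rigid building block, not the full WGF algorithm, which also incorporates surfaces and ICP iterations; the marker positions below are synthetic:

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """Weighted least-squares rigid transform (R, t) mapping src -> dst."""
    w = np.asarray(w, float)[:, None] / np.sum(w)
    mu_s, mu_d = (w * src).sum(0), (w * dst).sum(0)   # weighted centroids
    H = ((src - mu_s) * w).T @ (dst - mu_d)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

# Synthetic "fiducial markers": rotate + translate a point set, then recover the pose
rng = np.random.default_rng(0)
src = rng.random((5, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = weighted_rigid_fit(src, dst, w=np.ones(5))   # recovers R_true and t
```

    In a WGF-style scheme the per-feature weights let reliable fiducials dominate while surface points contribute with lower influence.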

  9. 2D and 3D TOF-SIMS Imaging for Biological Analysis

    NASA Astrophysics Data System (ADS)

    Fletcher, John S.

    Secondary ion mass spectrometry (SIMS) is an established technique in the field of surface analysis but until recently has played only a very small role in the area of biological analysis. This chapter provides an overview of the application of secondary ion mass spectrometry to the analysis of biological samples including single cells, bacteria and tissue sections. The chapter will discuss how the challenges of biological analysis by SIMS have created an impetus for the development of new technology and methodology giving improved mass resolution, spatial resolution and sensitivity.

  10. A Conceptual Design For A Spaceborne 3D Imaging Lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2002-01-01

    First-generation spaceborne altimetric approaches are not well suited to achieving the few-meter horizontal resolution and decimeter-accuracy vertical (range) resolution on the global scale desired by many in the Earth and planetary science communities. The present paper discusses the major technological impediments to achieving few-meter transverse resolution globally using conventional approaches and offers a feasible conceptual design that utilizes modest-power kHz-rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual-wedge optical scanners with transmitter point-ahead correction.
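    The range and range-resolution arithmetic behind a photon-counting timing receiver is simple round-trip time of flight; the numbers below are generic physics, not the paper's design values:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(t_seconds):
    """One-way range from a round-trip photon time of flight."""
    return C * t_seconds / 2.0

def range_resolution(timing_resolution_s):
    """Range resolution implied by the receiver's timing resolution."""
    return C * timing_resolution_s / 2.0

# A timer resolving ~1 ns corresponds to ~15 cm range bins, so decimeter-level
# vertical accuracy requires (sub-)nanosecond timing channels.
print(round(range_resolution(1e-9), 3))  # 0.15 (meters)
```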

  11. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    PubMed Central

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While ASL traditionally employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full brain perfusion images were acquired at a 3×3×5 mm³ nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects was acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical detail, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  12. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  13. Image quality improvement for a 3D structure exhibiting multiple 2D patterns and its implementation.

    PubMed

    Hirayama, Ryuji; Nakayama, Hirotaka; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-04-01

    A three-dimensional (3D) structure designed by our proposed algorithm can simultaneously exhibit multiple two-dimensional patterns. The 3D structure provides multiple patterns having directional characteristics by distributing the effects of the artefacts. In this study, we proposed an iterative algorithm to improve the image quality of the exhibited patterns and have verified the effectiveness of the proposed algorithm using numerical simulations. Moreover, we fabricated different 3D glass structures (an octagonal prism, a cube and a sphere) using the proposed algorithm. All 3D structures exhibit four patterns, and different patterns can be observed depending on the viewing direction. PMID:27137021

  14. Application to monitoring of tailings dam based on 3D laser scanning technology

    NASA Astrophysics Data System (ADS)

    Ren, Fang; Zhang, Aiwu

    2011-06-01

    This paper presents a new method for monitoring tailings dams based on 3D laser scanning technology and gives the workflow for acquiring and processing the tailings dam data. Taking measured data as an example, the authors analyzed the dam deformation by generating the TIN, DEM and curvature graphs, and showed that globally monitoring a tailings dam using 3D laser scanning technology is feasible in both theory and method.

  15. MO-H-19A-03: Patient Specific Bolus with 3D Printing Technology for Electron Radiotherapy

    SciTech Connect

    Zou, W; Swann, B; Siderits, R; McKenna, M; Khan, A; Yue, N; Zhang, M; Fisher, T

    2014-06-15

    Purpose: Boluses are widely used in electron radiotherapy to achieve the desired dose distribution. 3D printing technologies give clinicians easy access to fabricating patient-specific boluses that accommodate body-surface irregularities and tissue inhomogeneity. This study presents the design and the clinical workflow of 3D-printed boluses for patient electron therapy in our clinic. Methods: Patient simulation CT images free of bolus were exported from the treatment planning system (TPS) to an in-house developed software package. A bolus with known material properties was designed in the software package and then exported back to the TPS as a structure. Dose calculation was carried out to examine the coverage of the target. After a satisfactory dose distribution was achieved, the bolus structure was transferred in Standard Tessellation Language (STL) file format to the 3D printer, which generated the machine codes for printing. Upon receiving the printed bolus, a quick quality assurance check was performed, with the patient re-simulated with the bolus in place to verify its dosimetric properties before treatment started. Results: A patient-specific bolus for electron radiotherapy was designed and fabricated on a Form 1 3D printer with methacrylate photopolymer resin. A satisfactory dose distribution was achieved with the bolus in place, and treatment was successfully completed for one patient with the 3D-printed bolus. Conclusion: Electron bolus fabrication with 3D printing technology was successfully implemented in clinical practice.
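    The STL hand-off mentioned above can be illustrated with a minimal ASCII STL writer: the format stores one facet normal plus three vertices per triangle. This is a generic sketch of the file format, not the clinic's export code, and a real bolus mesh would contain many thousands of facets:

```python
import numpy as np

def write_ascii_stl(path, triangles, name="bolus"):
    """Write triangles (an (N, 3, 3) array of vertex coordinates) as ASCII STL."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            n = np.cross(v1 - v0, v2 - v0)   # facet normal from the edge vectors
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single triangular facet as a minimal demonstration
tri = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]]], dtype=float)
write_ascii_stl("facet.stl", tri)
```

    Printers and slicers more commonly consume the binary STL variant, but the geometric content is identical.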

  16. Terahertz Lasers Reveal Information for 3D Images

    NASA Technical Reports Server (NTRS)

    2013-01-01

    After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can reveal guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

  17. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration

    NASA Astrophysics Data System (ADS)

    Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

    2013-11-01

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the ‘hand-eye’ calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795).

  18. Automatic 3D pulmonary nodule detection in CT images: A survey.

    PubMed

    Valente, Igor Rafael S; Cortez, Paulo César; Neto, Edson Cavalcanti; Soares, José Marques; de Albuquerque, Victor Hugo C; Tavares, João Manuel R S

    2016-02-01

    This work presents a systematic review of techniques for the 3D automatic detection of pulmonary nodules in computerized-tomography (CT) images. Its main goals are to analyze the latest technology being used for the development of computational diagnostic tools to assist in the acquisition, storage and, mainly, processing and analysis of the biomedical data. Also, this work identifies the progress made so far, evaluates the challenges to be overcome and provides an analysis of future prospects. As far as the authors know, this is the first time that a review is devoted exclusively to automated 3D techniques for the detection of pulmonary nodules from lung CT images, which makes this work of noteworthy value. The research covered the published works in the Web of Science, PubMed, Science Direct and IEEEXplore up to December 2014. Each work found that referred to automated 3D segmentation of the lungs was analyzed individually to identify its objective, methodology and results. Based on the analysis of the selected works, several studies were seen to be useful for the construction of medical diagnostic aid tools. However, certain aspects still require attention, such as increasing algorithm sensitivity, reducing the number of false positives, improving and optimizing the detection of nodules of different kinds, sizes and shapes and, finally, the ability to integrate with Electronic Medical Record Systems and Picture Archiving and Communication Systems. Based on this analysis, we can say that further research is needed to develop current techniques and that new algorithms are needed to overcome the identified drawbacks. PMID:26652979

  19. OVERALL PROCEDURES PROTOCOL AND PATIENT ENROLLMENT PROTOCOL: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES

    EPA Science Inventory

    The purpose of this study is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D imag
    ...

  20. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and are therefore better understood if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging, as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully convey the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding of the structure under consideration. One of the very few possibilities to generate real 3D images that work on a 2D display is to use so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye and the other image by the other eye, which together produce the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos through a mirror-stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One method for generating and viewing a stereoscopic image that does not require a high-tech viewing device is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
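The red-cyan overlay described above can be sketched in a few lines: take the red channel from the left-eye image and the green and blue (cyan) channels from the right-eye image. This is a minimal illustration of the principle, not the authors' workflow; the synthetic stereo pair below stands in for two photographs taken from slightly offset viewpoints:

```python
import numpy as np


def make_anaglyph(left, right):
    """Compose a red-cyan anaglyph from two RGB images (H, W, 3).

    The red channel comes from the left-eye image; the green and blue
    channels (together forming cyan) come from the right-eye image.
    Viewed through red-cyan glasses, each eye sees only its image.
    """
    assert left.shape == right.shape, "stereo pair must match in size"
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]  # replace red channel with left view
    return anaglyph


# Synthetic stereo pair: a white square shifted 4 px between the views,
# mimicking the horizontal parallax of two offset camera positions
left = np.zeros((64, 64, 3), dtype=np.uint8)
right = np.zeros((64, 64, 3), dtype=np.uint8)
left[20:40, 20:40] = 255
right[20:40, 24:44] = 255

ana = make_anaglyph(left, right)
```

Pixels seen only by the left view come out pure red, pixels seen only by the right view come out cyan, and pixels shared by both views stay white, which is exactly the fringing visible around objects in a printed anaglyph.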

  1. Integrating airborne LiDAR dataset and photographic images towards the construction of 3D building model

    NASA Astrophysics Data System (ADS)

    Idris, R.; Latif, Z. A.; Hamid, J. R. A.; Jaafar, J.; Ahmad, M. Y.

    2014-02-01

    A 3D building model of man-made objects is an important tool for various applications such as urban planning, flood mapping and telecommunication. The reconstruction of 3D building models remains difficult. No universal algorithms exist that can extract all objects in an image successfully. At present, advances in remote se