Sample records for real 3D object

  1. Real three-dimensional objects: effects on mental rotation.

    PubMed

    Felix, Michael C; Parker, Joshua D; Lee, Charles; Gabriel, Kara I

    2011-08-01

    The current experiment investigated real three-dimensional (3D) objects with regard to performance on a mental rotation task and whether the appearance of sex differences may be mediated by experiences with spatially related activities. 40 men and 40 women were presented with alternating timed trials consisting of real-3D objects or two-dimensional illustrations of 3D objects. Sex differences in spatially related activities did not significantly influence the finding that men outperformed women on mental rotation of either stimulus type. However, on measures related to spatial activities, self-reported proficiency using maps correlated positively with performance only on trials with illustrations whereas self-reported proficiency using GPS correlated negatively with performance regardless of stimulus dimensionality. Findings may be interpreted as suggesting that rotating real-3D objects utilizes distinct but overlapping spatial skills compared to rotating two-dimensional representations of 3D objects, and real-3D objects can enhance mental rotation performance.

  2. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious, no-touch artifacts. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.

  3. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized model at the angular step of the given anamorphic optic system. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display is an excellent way to present a real object in the 360-degree viewing zone.
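
    The combination step described above — bringing each depth camera's point cloud into a common frame and fusing them into one synthetic model — can be sketched as follows. This is a minimal illustration assuming the camera poses are already known; it is not the paper's special registration method:

```python
import numpy as np

def merge_point_clouds(clouds, poses):
    """Transform each camera's point cloud into a common world frame
    and concatenate them into one synthetic model.

    clouds : list of (N_i, 3) arrays, points in each camera's frame
    poses  : list of (R, t) pairs mapping camera frame -> world frame
    """
    merged = []
    for pts, (R, t) in zip(clouds, poses):
        merged.append(pts @ R.T + t)  # world = R * p_cam + t
    return np.vstack(merged)
```

In practice the poses themselves come from a registration step (e.g. aligning overlapping regions of the clouds), which is the hard part the paper addresses.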

  4. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations.

    PubMed

    Freud, Erez; Macdonald, Scott N; Chen, Juan; Quinlan, Derek J; Goodale, Melvyn A; Culham, Jody C

    2018-01-01

    In the current era of touchscreen technology, humans commonly execute visually guided actions directed at two-dimensional (2D) images of objects. Although real, three-dimensional (3D) objects and images of the same objects share a high degree of visual similarity, they differ fundamentally in the actions that can be performed on them. Indeed, previous behavioral studies have suggested that simulated grasping of images relies on different representations than actual grasping of real 3D objects. Yet the neural underpinnings of this phenomenon have not been investigated. Here we used functional magnetic resonance imaging (fMRI) to investigate how brain activation patterns differ for grasping and reaching actions directed toward real 3D objects compared to images. Multivoxel pattern analysis (MVPA) revealed that the left anterior intraparietal sulcus (aIPS), a key region for visually guided grasping, discriminates between both the format in which objects were presented (real/image) and the motor task performed on them (grasping/reaching). Interestingly, during action planning, the representations of real 3D objects versus images differed more for grasping movements than reaching movements, likely because grasping real 3D objects involves fine-grained planning and anticipation of the consequences of a real interaction. Importantly, this dissociation was evident in the planning phase, before movement initiation, and was not found in any other regions, including motor and somatosensory cortices. This suggests that the dissociable representations in the left aIPS were not based on haptic, motor, or proprioceptive feedback. Together, these findings provide novel evidence that actions, particularly grasping, are affected by the realness of the target objects during planning, perhaps because real targets require a more elaborate forward model based on visual cues to predict the consequences of real manipulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The effect of background and illumination on color identification of real, 3D objects.

    PubMed

    Allred, Sarah R; Olkkonen, Maria

    2013-01-01

    For the surface reflectance of an object to be a useful cue to object identity, judgments of its color should remain stable across changes in the object's environment. In 2D scenes, there is general consensus that color judgments are much more stable across illumination changes than background changes. Here we investigate whether these findings generalize to real 3D objects. Observers made color matches to cubes as we independently varied both the illumination impinging on the cube and the 3D background of the cube. As in 2D scenes, we found relatively high but imperfect stability of color judgments under an illuminant shift. In contrast to 2D scenes, we found that background had little effect on average color judgments. In addition, variability of color judgments was increased by an illuminant shift and decreased by embedding the cube within a background. Taken together, these results suggest that in real 3D scenes with ample cues to object segregation, the addition of a background may improve stability of color identification.

  6. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
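
    Of the filters named above, Radius Outlier Removal (ROR) is the simplest to sketch: drop every point that has too few neighbors within a given radius. The brute-force version below (with hypothetical parameter names) illustrates the idea; a real-time pipeline like the paper's would use a k-d tree rather than a full distance matrix:

```python
import numpy as np

def radius_outlier_removal(points, radius, min_neighbors):
    """Keep only points that have at least `min_neighbors` other points
    within `radius`. Brute force: O(N^2) distance matrix."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # neighbor counts exclude the point itself (distance 0 on the diagonal)
    counts = (dist <= radius).sum(axis=1) - 1
    return points[counts >= min_neighbors]
```

Isolated sensor noise ends up with few close neighbors and is removed, while points on dense surfaces survive.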

  7. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that captures light-field 3D images in real time, performs 3D reconstruction to build a 3D model of the objects, and displays the model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  8. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
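
    The flavor of a per-event incremental update can be shown with a heavily simplified, translation-only sketch (hypothetical; the paper combines full 3D and 2D criteria over all six pose parameters). Each event nudges the current pose estimate so that the projection of its associated model point moves a small fraction toward the observed event pixel:

```python
import numpy as np

def project(P, t, f=1.0):
    """Pinhole projection of 3D model point P under translation t."""
    X, Y, Z = P + t
    return np.array([f * X / Z, f * Y / Z])

def event_update(t, P, event_px, gain=0.1, f=1.0):
    """One incremental correction per event (translation-only sketch).
    The image Jacobian w.r.t. (tx, ty) is approximately f/Z, so the
    2D residual is mapped back to a small translation step."""
    err = event_px - project(P, t, f)  # image-plane residual
    Z = (P + t)[2]
    dt = gain * err * Z / f            # fraction of the residual
    return t + np.array([dt[0], dt[1], 0.0])
```

Because each update is tiny and cheap, thousands of events per second can be absorbed, which is what makes the incremental, event-driven formulation attractive.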

  9. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  10. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, under both binocular and monocular viewing. The equipment comprised an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15, and 30 cm behind it. Under the real-object display condition, the target was displayed on the 3D display panel, and the display itself was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under both viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  11. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
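
    One common way to turn two gaze rays (one per eye) into a 3D fixation point — offered here only as an illustrative stand-in for the paper's gaze vector method — is to take the midpoint of the common perpendicular between the two rays:

```python
import numpy as np

def gaze_intersection(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between two rays with
    origins o1, o2 and directions d1, d2 (need not be unit length)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # ~0 only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1               # closest point on ray 1
    p2 = o2 + t * d2               # closest point on ray 2
    return 0.5 * (p1 + p2)
```

Because measured gaze rays rarely intersect exactly, the midpoint of the shortest segment between them is the usual least-squares-style estimate of the fixated 3D coordinate.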

  12. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach that is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain the 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures make it possible to properly weight the contribution of noisy, erroneous, or false data in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of the 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. It runs in real time and its results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
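
    The core idea of weighting evidence by reliability can be illustrated with a minimal sketch (not the paper's exact formulation): each noisy estimate of an object attribute contributes in proportion to its reliability measure, so erroneous data with low reliability barely perturbs the model:

```python
def fuse_attribute(measurements):
    """Reliability-weighted fusion of an object attribute (e.g. width)
    across several noisy observations.

    measurements: list of (value, reliability) with reliability in [0, 1].
    Returns None when no observation carries any reliability.
    """
    total_w = sum(r for _, r in measurements)
    if total_w == 0:
        return None
    return sum(v * r for v, r in measurements) / total_w
```

For example, a confident estimate of 1.0 (reliability 0.9) fused with a dubious estimate of 3.0 (reliability 0.1) lands near the confident one.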

  13. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using it, users can visually simulate the placement of virtual furniture and articles in a living room, viewing from many different locations and orientations in real time, so that they can easily design the room interior without placing real furniture and articles. In our system, two base images of the real world space are captured from two different views to define a projective coordinate frame of the 3D space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while non-metric feature points are tracked for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).

  14. Cognitive Load and Attentional Demands during Objects' Position Change in Real and Digital Environments

    ERIC Educational Resources Information Center

    Zacharis, Georgios K.; Mikropoulos, Tassos Anastasios; Kalyvioti, Katerina

    2016-01-01

    Studies showed that two-dimensional (2D) and three-dimensional (3D) educational content contributes to learning. Although there were many studies with 3D stereoscopic learning environments, only a few studies reported on the differences between real, 2D, and 3D scenes, as far as cognitive load and attentional demands were concerned. We used…

  15. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce more realistic environments than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map; graphic objects are then combined with the height map. Our proposed approach is realized in the following steps: (1) We derive 3D structure from test image sequences, estimating depth and constructing a height map. Given the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, Step (3) is easily carried out. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
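
    Step (2) turns the height map into a surface model. A simple sketch of that idea — using a fixed two-triangles-per-grid-cell tessellation rather than the Delaunay triangulation or Bezier surfaces the paper uses — looks like:

```python
import numpy as np

def heightmap_to_mesh(H):
    """Convert an (rows, cols) height map into a triangle mesh:
    one vertex per grid sample, two triangles per grid cell."""
    rows, cols = H.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    vertices = np.column_stack([xs.ravel(), ys.ravel(), H.ravel()])
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append([i, i + 1, i + cols])              # upper-left tri
            tris.append([i + 1, i + cols + 1, i + cols])   # lower-right tri
    return vertices, np.array(tris)
```

Once the mesh exists, texture-mapping each triangle and placing graphic objects at known heights (Step 3) becomes straightforward, since every surface point has an explicit 3D position.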

  16. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel optical interference fringes onto an object and then to record the object from two viewing angles. With appropriate processing, the 3-D object can be reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the 3-D data extraction and processing, and the reconstruction of the 3-D object are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation, and research purposes.

  17. Haptic identification of objects and their depictions.

    PubMed

    Klatzky, R L; Loomis, J M; Lederman, S J; Wake, H; Fujita, N

    1993-08-01

    Haptic identification of real objects is superior to that of raised two-dimensional (2-D) depictions. Three explanations of real-object superiority were investigated: contribution of material information, contribution of 3-D shape and size, and greater potential for integration across the fingers. In Experiment 1, subjects, while wearing gloves that gently attenuated material information, haptically identified real objects that provided reduced cues to compliance, mass, and part motion. The gloves permitted exploration with free hand movement, a single outstretched finger, or five outstretched fingers. Performance decreased over these three conditions but was superior to identification of pictures of the same objects in all cases, indicating the contribution of 3-D structure and integration across the fingers. Picture performance was also better with five fingers than with one. In Experiment 2, the subjects wore open-fingered gloves, which provided them with material information. Consequently, the effect of type of exploration was substantially reduced but not eliminated. Material compensates somewhat for limited access to object structure but is not the primary basis for haptic object identification.

  18. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display.

    PubMed

    Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho

    2018-06-01

    Here, we present dual-dimensional microscopy that captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image in real time, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at the native object plane. The whole process from capturing to displaying is done in real time with the parallel computation algorithm, which enables the observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  19. Nondestructive analysis of three-dimensional objects using a fluid displacement method

    USDA-ARS?s Scientific Manuscript database

    Quantification of three-dimensional (3-D) objects has been a real challenge in agricultural, hydrological and environmental studies. We designed and tested a method that is capable of quantifying 3-D objects using measurements of fluid displacement. The device consists of a stand that supports a mov...
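
    The arithmetic behind fluid-displacement quantification is Archimedes' principle; a minimal sketch (hypothetical helper names, straight-walled container assumed):

```python
def volume_from_level_rise(delta_h_cm, area_cm2):
    """Submerged-object volume from the fluid level rise in a
    straight-walled container: V = delta_h * A (cm^3)."""
    return delta_h_cm * area_cm2

def volume_from_displaced_mass(mass_g, fluid_density_g_cm3=1.0):
    """Alternative: weigh the displaced fluid, then V = m / rho.
    Default density is that of water (1 g/cm^3)."""
    return mass_g / fluid_density_g_cm3
```

Either route yields the object's volume without cutting or deforming it, which is what makes the approach nondestructive.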

  20. Plot Scale Factor Models for Standard Orthographic Views

    ERIC Educational Resources Information Center

    Osakue, Edward E.

    2007-01-01

    Geometric modeling provides graphic representations of real or abstract objects. Realistic representation requires three dimensional (3D) attributes since natural objects have three principal dimensions. CAD software gives the user the ability to construct realistic 3D models of objects, but often prints of these models must be generated on two…

  1. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for augmented reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera that presents parallax content from different angles through a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on two-dimensional (2D) panel displays with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that the improved stereo-camera-based method realizes AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and obvious stereo performance.

  2. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    NASA Astrophysics Data System (ADS)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  3. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  4. Creating 3D models of historical buildings using geospatial data

    NASA Astrophysics Data System (ADS)

    Alionescu, Adrian; Bǎlǎ, Alina Corina; Brebu, Floarea Maria; Moscovici, Anca-Maria

    2017-07-01

    Recently, much interest has been shown in understanding real-world objects by acquiring 3D images of them using laser scanning technology and panoramic images. A realistic impression of geometric 3D data can be generated by draping real colour textures simultaneously captured by a colour camera. In this context, a new concept of geospatial data acquisition, based on panoramic images, has rapidly revolutionized the method of determining the spatial position of objects. This article describes an approach that combines terrestrial laser scanning and panoramic images captured with Trimble V10 Imaging Rover technology to increase the detail and realism of the geospatial data set, in order to obtain 3D urban plans and virtual reality applications.

  5. Contour-based object orientation estimation

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel

    2016-04-01

    Real-time object orientation estimation is a topical problem in computer vision. In this paper we propose an approach to estimating the orientation of objects lacking axial symmetry. The proposed algorithm estimates the orientation of a specific known 3D object, so a 3D model is required for learning. The algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object: using the 3D model, we gather a set of training images by capturing the model from viewpoints evenly distributed on a sphere. The sphere points are distributed according to the geosphere principle, which minimizes the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches an observed image descriptor against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
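
    Sampling viewpoints "evenly distributed on a sphere" can be approximated with a Fibonacci sphere lattice — a common stand-in for the geosphere construction used in the paper, shown here as an illustrative sketch:

```python
import math

def fibonacci_sphere(n):
    """Return n roughly evenly distributed unit vectors on a sphere
    using the Fibonacci (golden-angle) lattice."""
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # y strictly inside (-1, 1)
        r = math.sqrt(1.0 - y * y)             # radius of the latitude circle
        theta = golden * i
        pts.append((r * math.cos(theta), y, r * math.sin(theta)))
    return pts
```

Each returned direction can serve as a virtual camera position looking at the model's center, so the training set covers all orientations without clustering at the poles.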

  6. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First is the technical aspect of flawlessly delivering 3D content across all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multifaceted virtual museum. Finally, 3D has great potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  7. Refocusing-range and image-quality enhanced optical reconstruction of 3-D objects from integral images using a principal periodic δ-function array

    NASA Astrophysics Data System (ADS)

    Ai, Lingyu; Kim, Eun-Soo

    2018-03-01

    We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images using only a 3 × 3 periodic δ-function array (PDFA), called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since the convolution operates directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
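
    The depth-selective filtering can be illustrated with a toy sketch (array sizes and the period are arbitrary choices, not values from the paper). Because a PDFA consists only of impulses, convolving the EIA with a 3 × 3 PDFA reduces to summing shifted copies of the EIA:

```python
def filter_eia(eia, period, n=3):
    """Convolve an elemental image array (list of rows) with an n x n
    periodic delta-function array whose impulses lie `period` pixels apart.
    Since the kernel is all impulses, this is a sum of shifted copies."""
    h, w = len(eia), len(eia[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(n):
        for j in range(n):
            dy, dx = i * period, j * period
            for y in range(dy, h):
                for x in range(dx, w):
                    out[y][x] += eia[y - dy][x - dx]
    return out

# A uniform toy EIA: interior pixels accumulate all 9 impulse responses,
# corner pixels only 1, showing the depth-periodic accumulation.
sf_eia = filter_eia([[1.0] * 8 for _ in range(8)], period=2)
```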

  8. Collaborative Aerial-Drawing System for Supporting Co-Creative Communication

    NASA Astrophysics Data System (ADS)

    Osaki, Akihiro; Taniguchi, Hiroyuki; Miwa, Yoshiyuki

    This paper describes a collaborative augmented reality (AR) system with which multiple users can simultaneously handwrite 3D lines in the air and manipulate the lines directly in the real world. In addition, we propose a new technique for co-creative communication utilizing this 3D drawing activity. Various 3D user interfaces have been proposed to date, but most of them aim to solve specific problems in virtual environments, and the possibilities of 3D drawing expression have not yet been explored. Accordingly, we paid special attention to interaction with real objects in daily life, and designed the system so that real objects and 3D lines can be manipulated by the same actions, without distinction. The developed AR system consists of a stereoscopic head-mounted display, a drawing tool, 6DOF sensors measuring three-dimensional position and Euler angles, and a 3D user interface that enables users to push, grasp and pitch 3D lines directly with the drawing tool. Additionally, users can pick up a desired color from either the landscape or a virtual line through direct interaction with this tool. For sharing 3D lines among multiple users at the same place, a distributed AR system has been developed that mutually sends and receives drawn data between systems. With the developed system, users can design jointly in real space by arranging each 3D drawing through direct manipulation. Moreover, new entertainment applications become possible, such as playing catch, fencing, and the like.

  9. 15-Month-Olds’ Transfer of Learning between Touch Screen and Real-World Displays: Language Cues and Cognitive Loads

    PubMed Central

    Zack, Elizabeth; Gerhardstein, Peter; Meltzoff, Andrew N.; Barr, Rachel

    2012-01-01

    Infants have difficulty transferring information between 2D and 3D sources. The current study extends Zack et al.’s (2009) touch screen imitation task to examine whether the addition of specific language cues significantly facilitates 15-month-olds’ transfer of learning between touch screens and real-world 3D objects. The addition of two kinds of linguistic cues (object label plus verb or nonsense name) did not elevate action imitation significantly above levels observed when such language cues were not used. Language cues hindered infants’ performance in the 3D→2D direction of transfer, but only for the object label plus verb condition. The lack of a facilitative effect of language is discussed in terms of competing cognitive loads imposed by conjointly transferring information across dimensions and processing linguistic cues in an action imitation task at this age. PMID:23121508

  10. The hologram as a space of illusion

    NASA Astrophysics Data System (ADS)

    Oliveira, Rosa M.

    2013-03-01

    One of the most interesting aspects of art holography is the study of the 3D holographic image. Over the centuries, artists have chased the best way to represent the third dimension as faithfully as possible. Several steps have been taken in this direction, first using perspective, then photography, and later film, but none of these representations of reality fully reached the objective. The realism of a 3D representation on a 2D support (paper, canvas, celluloid) is completely surpassed by holography. Although the holographic plate or film is also a 2D support, the holographic image is a recording of all the information about the object contained in light. Our perception doesn't need to translate the object as real. It is real. Though immaterial, the holographic image is real because it exists in light: the same parallax, the same shape. The representation is no longer an imitation of reality but a replacement of the real object or scene. The space where it exists is a space of illusion, and multiple objects can occupy the same place in the hologram, depending on the viewer's time and position. This introduces a fourth dimension into the hologram: time, as well as the apparent conflict between the presence and absence of images, which is possible only in holography.

  11. Extracting 3D Parametric Curves from 2D Images of Helical Objects.

    PubMed

    Willcocks, Chris G; Jackson, Philip T G; Nelson, Carl J; Obara, Boguslaw

    2017-09-01

    Helical objects occur in medicine, biology, cosmetics, nanotechnology, and engineering. Extracting a 3D parametric curve from a 2D image of a helical object has many practical applications, in particular being able to extract metrics such as tortuosity, frequency, and pitch. We present a method that is able to straighten the image object and derive a robust 3D helical curve from peaks in the object boundary. The algorithm has a small number of stable parameters that require little tuning, and the curve is validated against both synthetic and real-world data. The results show that the extracted 3D curve comes within close Hausdorff distance to the ground truth, and has near identical tortuosity for helical objects with a circular profile. Parameter insensitivity and robustness against high levels of image noise are demonstrated thoroughly and quantitatively.
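
    As an illustration of one such metric, the tortuosity of an extracted 3D curve is commonly computed as arc length divided by chord length (a minimal sketch using one common definition; the paper's exact variant may differ):

```python
import math

def tortuosity(points):
    """Tortuosity of a 3D polyline: total arc length divided by the
    straight-line (chord) distance between its endpoints."""
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

# One turn of a helix with radius 1 and pitch 2*pi; its analytic
# tortuosity is sqrt(2) ~ 1.414.
helix = [(math.cos(t), math.sin(t), t)
         for t in (2 * math.pi * k / 100 for k in range(101))]
```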

  12. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline that fuses 3D B-mode data with 3D ultrasound elastography (USE) data, visualizes the fused data, and updates it in real time with each consecutive scan. The 3D B-mode data assist in visualizing the anatomy of the target organ, and the 3D elastography data add strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. The system consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by a 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a 4.45-fold speed improvement for the threaded version of the NCC-based 3D USE over the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to a B-mode volume. We also validated the 5D US system's visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesions). Finally, we applied the system to a phantom containing three lesions to delineate them from the surrounding background regions. A 5D US system is achievable with real-time performance, and we can distinguish between hard and soft areas in a phantom using the transfer functions.
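
    The NCC computation at the heart of the elastography stage (parallelized on the GPU in the paper) can be sketched in plain Python; the signals, window size and search range below are illustrative, not values from the paper:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sample windows.
    Values near 1 indicate a good match."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def best_shift(pre, post, window, max_shift):
    """Displacement (in samples) maximizing NCC between a window of the
    pre-compression signal and shifted windows of the post-compression one;
    elastography derives strain from the gradient of such displacements."""
    ref = pre[:window]
    return max(range(max_shift + 1),
               key=lambda s: ncc(ref, post[s:s + window]))

pre = [math.sin(0.5 * i) for i in range(40)]
post = [0.0, 0.0, 0.0] + pre        # post-compression frame shifted by 3
shift = best_shift(pre, post, window=20, max_shift=6)
```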

  13. Use of micro computed-tomography and 3D printing for reverse engineering of mouse embryo nasal capsule

    NASA Astrophysics Data System (ADS)

    Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.

    2016-03-01

    Imaging of the increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. This is especially important for studying shape-organizing processes during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with resolution in the micrometer range, combined with contrasting techniques, seems to be the most suitable approach for non-destructive imaging of developing embryonic cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object may offer novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of a mouse embryo nasal capsule: staining, μCT scanning combined with advanced data processing, and 3D printing.

  14. Gender Differences in Object Location Memory in a Real Three-Dimensional Environment

    ERIC Educational Resources Information Center

    Iachini, Tina; Sergi, Ida; Ruggiero, Gennaro; Gnisci, Augusto

    2005-01-01

    In this preliminary study we investigate gender differences in object location memory. Our purpose is to extend the results about object location memory obtained in laboratory settings to a real 3-D environment and to further distinguish the specific components involved in this kind of memory by considering the strategies adopted to perform the…

  15. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    PubMed

    Falomir, Zoe; Kluth, Thomas

    2017-06-24

    The challenge of describing real 3D scenes is tackled in this paper using qualitative spatial descriptors. A key question is which qualitative descriptors to use and how they must be organized to produce a suitable cognitive explanation. In order to find answers, a survey was carried out in which human participants openly described a scene containing some pieces of furniture. The data obtained in this survey are analysed and, taking them into account, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study was carried out to test whether the descriptions provided by the QSn3D approach are human readable. The results show that their acceptability is higher than 82%.
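
    A drastically simplified sketch of an orientation-based qualitative spatial relation (the four labels and the 90° sectors are illustrative assumptions, not the QSn3D calculus):

```python
import math

def qualitative_relation(ref, ref_heading, target):
    """Map the angle from a reference object's heading (degrees, 2D) to a
    target object into a qualitative label, e.g. for feeding a grammar
    that produces natural-language descriptions."""
    dx, dy = target[0] - ref[0], target[1] - ref[1]
    ang = math.degrees(math.atan2(dy, dx)) - ref_heading
    ang = (ang + 180.0) % 360.0 - 180.0        # wrap to [-180, 180)
    if -45.0 <= ang <= 45.0:
        return "in front of"
    if 45.0 < ang < 135.0:
        return "left of"
    if -135.0 < ang < -45.0:
        return "right of"
    return "behind"

# A chair at the origin facing +y; a table two units to its left:
relation = qualitative_relation((0.0, 0.0), 90.0, (-2.0, 0.0))
```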

  16. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    NASA Astrophysics Data System (ADS)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Combat training is very important for the army, and simulation of the real battlefield environment is of great significance. Two-dimensional information can no longer meet present demands. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In simulating a 3D battlefield environment, in addition to the terrain, combat personnel and combat tools, the simulation of explosions, fire, smoke and other effects is also very important, since these effects enhance the senses of realism and immersion in the 3D scene. However, these special effects are irregular objects, which are difficult to simulate with general geometry; the simulation of irregular objects has therefore long been a hot and difficult research topic in computer graphics. Here, the particle system algorithm is used to simulate irregular objects. We design simulations of explosions, fire and smoke based on the particle system and apply them to the battlefield 3D scene. In addition, the battlefield 3D scene simulation is shown on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview 3D video real-time transformation method. Together with human-computer interaction, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
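
    A minimal particle-system loop of the kind used for fire and smoke (the emission parameters, gravity and lifetimes are arbitrary illustrative values): particles are emitted, integrated, aged, and removed when their lifetime expires.

```python
import random

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = pos, vel, life

def emit(n, speed=1.0, life=2.0):
    """Spawn n particles at the emitter with randomized upward velocities."""
    ps = []
    for _ in range(n):
        vel = [random.uniform(-0.3, 0.3) * speed,
               random.uniform(0.5, 1.0) * speed,
               random.uniform(-0.3, 0.3) * speed]
        ps.append(Particle([0.0, 0.0, 0.0], vel, life))
    return ps

def update(particles, dt, gravity=-9.8):
    """One simulation step: integrate motion, age particles, drop dead ones."""
    alive = []
    for p in particles:
        p.vel[1] += gravity * dt
        for i in range(3):
            p.pos[i] += p.vel[i] * dt
        p.life -= dt
        if p.life > 0.0:
            alive.append(p)
    return alive

particles = emit(100)
for _ in range(10):                 # ten steps of 0.05 s
    particles = update(particles, 0.05)
```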

  17. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. 
Several case studies have been conducted using a variety of point densities, terrain types and building densities. The results have been encouraging. More work is required for better processing of, for example, forested areas, buildings with sides that are not at right angles or are not straight, and single trees that impinge on buildings. Further work may also be required to ensure that the buildings extracted are of fully cartographic quality. A first version will be included in production software later in 2011. In addition to the standard geospatial applications and the UAV navigation, the results have a further advantage: since LiDAR data tends to be accurately georeferenced, the building models extracted can be used to refine image metadata whenever the same buildings appear in imagery for which the GPS/IMU values are poorer than those for the LiDAR.
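
    The first step described above — deriving candidate object points from the DSM and DEM — can be sketched as follows (the toy grids and the 2 m height threshold are illustrative assumptions):

```python
def ndsm(dsm, dem):
    """Normalized surface model: per-cell height above ground (DSM - DEM)."""
    return [[s - g for s, g in zip(srow, grow)]
            for srow, grow in zip(dsm, dem)]

def object_cells(ndsm_grid, min_height=2.0):
    """Cells rising more than min_height above ground are candidate object
    points (buildings, trees) for the later grouping and tracing steps."""
    return {(r, c)
            for r, row in enumerate(ndsm_grid)
            for c, h in enumerate(row)
            if h > min_height}

dem = [[10.0] * 4 for _ in range(4)]              # flat ground at 10 m
dsm = [[10.0, 10.0, 10.0, 10.0],
       [10.0, 15.0, 15.0, 10.0],                   # a 5 m structure
       [10.0, 15.0, 15.0, 10.0],
       [10.0, 10.0, 10.0, 10.0]]
cells = object_cells(ndsm(dsm, dem))
```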

  18. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  20. An Approach to Develop 3d Geo-Dbms Topological Operators by Re-Using Existing 2d Operators

    NASA Astrophysics Data System (ADS)

    Xu, D.; Zlatanova, S.

    2013-09-01

    Database systems are continuously extending their capabilities to store, process and analyse 3D data. Topological relationships, which describe the interaction of objects in space, are one of the important spatial issues. However, spatial operators for 3D objects are still insufficient. In this paper we present the development of a new 3D topological function to distinguish intersections of 3D planar polygons. The development re-uses existing 2D functions in the DBMS together with two geometric transformations (rotation and projection). The function is tested on a real dataset to detect overlapping 3D city objects. The paper presents the algorithms and analyses the challenges. Suggestions for improving the current algorithm, as well as possible extensions to handle more 3D topological cases, are discussed at the end.
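
    The idea of answering a 3D question with an existing 2D operator can be sketched as follows: coplanar 3D points are rotated/projected into 2D plane coordinates, after which an ordinary 2D point-in-polygon test (standing in here for the DBMS's 2D functions) applies. A minimal sketch, not the paper's algorithm:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def project_to_plane(points3d, normal):
    """Rotate/project coplanar 3D points into 2D coordinates on their
    plane so that existing 2D operators can be re-used."""
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = unit(cross(normal, helper))
    v = unit(cross(normal, u))
    return [(dot(p, u), dot(p, v)) for p in points3d]

def point_in_polygon(pt, poly):
    """Classic 2D ray-casting test, standing in for a DBMS 2D operator."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and \
           x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# A square lying in the tilted plane z = y (normal (0, -1, 1)):
square3d = [(0, 0, 0), (2, 0, 0), (2, 2, 2), (0, 2, 2)]
normal = (0.0, -1.0, 1.0)
square2d = project_to_plane(square3d, normal)
inside = point_in_polygon(project_to_plane([(1, 1, 1)], normal)[0], square2d)
outside = point_in_polygon(project_to_plane([(10, 0, 0)], normal)[0], square2d)
```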

  1. 3-D Mind Maps: Placing Young Children in the Centre of Their Own Learning

    ERIC Educational Resources Information Center

    Howitt, Christine

    2009-01-01

    Three-dimensional mind maps are a highly effective tool for providing engaging, kinaesthetic and sensory experiences for young children, with real objects used to promote the sharing of knowledge and the creation of connections. The use of real objects allows children the opportunity to connect with those objects at a personal level, thus placing…

  2. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and offers continuous motion parallax. We have applied our technology to 15.4-inch displays, realizing a horizontal resolution of 480 with 12 parallaxes through a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle lets the viewer experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software; fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important, so we have measured their effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  3. Full-color large-scaled computer-generated holograms for physical and non-physical objects

    NASA Astrophysics Data System (ADS)

    Matsushima, Kyoji; Tsuchiyama, Yasuhiro; Sonobe, Noriaki; Masuji, Shoya; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-05-01

    Several full-color high-definition CGHs are created for reconstructing 3D scenes that include real physical objects. The fields of the physical objects are generated or captured by employing three techniques: a 3D scanner, synthetic-aperture digital holography, and multi-viewpoint images. Full-color reconstruction of the high-definition CGHs is realized with RGB color filters. Optical reconstructions are presented to verify these techniques.

  4. 15-month-olds' transfer of learning between touch screen and real-world displays: language cues and cognitive loads.

    PubMed

    Zack, Elizabeth; Gerhardstein, Peter; Meltzoff, Andrew N; Barr, Rachel

    2013-02-01

    Infants have difficulty transferring information between 2D and 3D sources. The current study extends Zack, Barr, Gerhardstein, Dickerson & Meltzoff's (2009) touch screen imitation task to examine whether the addition of specific language cues significantly facilitates 15-month-olds' transfer of learning between touch screens and real-world 3D objects. The addition of two kinds of linguistic cues (object label plus verb or nonsense name) did not elevate action imitation significantly above levels observed when such language cues were not used. Language cues hindered infants' performance in the 3D→2D direction of transfer, but only for the object label plus verb condition. The lack of a facilitative effect of language is discussed in terms of competing cognitive loads imposed by conjointly transferring information across dimensions and processing linguistic cues in an action imitation task at this age. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.

  5. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching to recognize 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. Frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Simulated 3D scenes of simple non-overlapping objects, as well as real camera images of 3D objects of low complexity, have been successfully interpreted.

  6. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
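
    The hybrid search can be sketched in 2D (the anchor layout, range readings and parameters below are invented for illustration; the paper works in 3D with RFID-derived ranges): simulated annealing explores broadly and stabilizes the search, then gradient descent polishes the estimate to reduce error.

```python
import math
import random

ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # readers
TRUE_POS = (3.0, 7.0)
RANGES = [math.dist(TRUE_POS, a) for a in ANCHORS]  # ideal range readings

def cost(p):
    """Sum of squared differences between predicted and measured ranges."""
    return sum((math.dist(p, a) - d) ** 2 for a, d in zip(ANCHORS, RANGES))

def anneal(start, temp=5.0, cooling=0.97, steps=400):
    """Simulated annealing: random moves, occasionally accepting worse
    candidates early on to escape poor regions."""
    random.seed(1)
    cur, cur_cost = start, cost(start)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = (cur[0] + random.uniform(-temp, temp),
                cur[1] + random.uniform(-temp, temp))
        cand_cost = cost(cand)
        if (cand_cost < cur_cost or
                random.random() < math.exp((cur_cost - cand_cost) / temp)):
            cur, cur_cost = cand, cand_cost
            if cand_cost < best_cost:
                best, best_cost = cand, cand_cost
        temp = max(temp * cooling, 1e-6)
    return best

def refine(p, lr=0.02, steps=1000, eps=1e-6):
    """Gradient descent with a numerical gradient to reduce residual error."""
    for _ in range(steps):
        gx = (cost((p[0] + eps, p[1])) - cost((p[0] - eps, p[1]))) / (2 * eps)
        gy = (cost((p[0], p[1] + eps)) - cost((p[0], p[1] - eps))) / (2 * eps)
        p = (p[0] - lr * gx, p[1] - lr * gy)
    return p

estimate = refine(anneal((5.0, 5.0)))
```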

  8. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A virtual campus 3D model can not only represent real-world objects naturally, realistically and vividly, but also extend the real time and space dimensions of the campus, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special grounds, and other objects. Dynamic interactive functions are then realized by programming the 3ds Max object models with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for the various real-time processing technologies used in the scene design process, which preserve texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  9. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then calculated simultaneously from the distances between the image edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.
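
    The per-frame update — match control points to image edges, then move the model to reduce the distances — can be caricatured in 2D with a translation-only, ICP-style step (the point sets and offset below are invented; the paper estimates full 3-D motion per rigid part):

```python
def nearest(p, pts):
    """Nearest observed edge point to a control point (squared distance)."""
    return min(pts, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def estimate_translation(control_pts, observed_edges, iters=10):
    """Repeatedly match each control point to its nearest observed edge
    point and shift by the mean offset: a translation-only sketch of the
    edge-based pose update."""
    tx = ty = 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in control_pts]
        dx = dy = 0.0
        for m in moved:
            q = nearest(m, observed_edges)
            dx += q[0] - m[0]
            dy += q[1] - m[1]
        tx += dx / len(moved)
        ty += dy / len(moved)
    return tx, ty

contour = [(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)]    # model control points
edges = [(x + 0.3, y - 0.2) for x, y in contour]       # detected image edges
tx, ty = estimate_translation(contour, edges)
```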

  10. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids adverse psychological effects. To create truly engaging three-dimensional television programmes, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD, and its further improvement in precision, is proposed and verified.

  11. Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes.

    PubMed

    Perch-Nielsen, Ivan; Rodrigo, Peter; Glückstad, Jesper

    2005-04-18

    The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture (NA), non-immersion, objective lenses in an implementation of the GPC-based 3D trapping system. Contrary to high-NA based optical tweezers, the GPC trapping system demonstrated here operates with long working distance (>10 mm), and offers a wider manipulation region and a larger field of view for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated.

  12. When the display matters: A multifaceted perspective on 3D geovisualizations

    NASA Astrophysics Data System (ADS)

    Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří

    2017-04-01

    This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to judge altitude information in non-interactive and interactive 3D geovisualizations. A two-phase experiment was carried out to compare the performance of two groups of participants, one using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, with static 3D visual displays presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of motor activity performed by the participant during interaction with the geovisualization. The interface was created using a motion capture system, a Wii Remote controller, widescreen projection and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.

  13. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
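The recognition pipeline above (HOG features over a sliding window, then a classifier) can be sketched with plain NumPy. This is only an illustrative simplification: cell/block normalisation of a full HOG descriptor and the SVM training stage are omitted, and all sizes and parameters are assumptions, not values from the paper.

```python
import numpy as np

def hog_features(window, n_bins=9):
    """Simplified histogram of oriented gradients for one window.
    Real detectors add cell/block normalisation; this keeps one
    global orientation histogram weighted by gradient magnitude."""
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def sliding_windows(image, size, step):
    """Yield (row, col, window) patches over the reconstructed scene."""
    h, w = image.shape
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            yield r, c, image[r:r + size, c:c + size]

# Toy scene: a bright square on a dark background.
scene = np.zeros((32, 32))
scene[8:24, 8:24] = 1.0
feats = [hog_features(w) for _, _, w in sliding_windows(scene, 16, 8)]
# Each window yields one 9-bin descriptor; an SVM would be trained on these.
```

In the paper's system such descriptors are fed to an SVM per window; windows with gradient structure (the square's edges here) produce non-zero descriptors.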

  14. Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.

    PubMed

    Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry

    2012-12-01

    Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
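The image-based stereo reconstruction step can be illustrated with naive SSD block matching. The paper's GPU algorithm is certainly more sophisticated; this NumPy sketch only conveys the principle of matching patches between the two views of the stereoscopic camera, with illustrative sizes.

```python
import numpy as np

def disparity_ssd(left, right, block=5, max_disp=8):
    """For each pixel in the left image, find the horizontal shift into
    the right image minimising the sum of squared differences over a
    small block. The disparity map is the input to depth computation."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for r in range(half, h - half):
        for c in range(half + max_disp, w - half):
            patch = left[r - half:r + half + 1, c - half:c + half + 1]
            costs = [np.sum((patch - right[r - half:r + half + 1,
                                          c - d - half:c - d + half + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[r, c] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right view is the left view shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
d = disparity_ssd(left, right)
```

The recovered disparity is constant (3 px) in the interior, as expected for a purely translated view.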

  15. Real-time three-dimensional color doppler evaluation of the flow convergence zone for quantification of mitral regurgitation: Validation experimental animal study and initial clinical experience

    NASA Technical Reports Server (NTRS)

    Sitges, Marta; Jones, Michael; Shiota, Takahiro; Qin, Jian Xin; Tsujino, Hiroyuki; Bauer, Fabrice; Kim, Yong Jin; Agler, Deborah A.; Cardon, Lisa A.; Zetts, Arthur D.; hide

    2003-01-01

    BACKGROUND: Pitfalls of the flow convergence (FC) method, including 2-dimensional imaging of the 3-dimensional (3D) geometry of the FC surface, can lead to erroneous quantification of mitral regurgitation (MR). This limitation may be mitigated by the use of real-time 3D color Doppler echocardiography (CE). Our objective was to validate a real-time 3D navigation method for MR quantification. METHODS: In 12 sheep with surgically induced chronic MR, 37 different hemodynamic conditions were studied with real-time 3DCE. Using real-time 3D navigation, the radius of the largest hemispherical FC zone was located and measured. MR volume was quantified according to the FC method after observing the shape of the FC in 3D space. Aortic and mitral electromagnetic flow probes and meters were balanced against each other to determine the reference MR volume. As an initial clinical application study, 22 patients with chronic MR were also studied with this real-time 3DCE-FC method. Left ventricular (LV) outflow tract automated cardiac flow measurement (Toshiba Corp, Tokyo, Japan) and real-time 3D LV stroke volume were used to quantify the reference MR volume (MR volume = 3D LV stroke volume - automated cardiac flow measurement). RESULTS: In the sheep model, good correlation and agreement were seen between MR volume by real-time 3DCE and the electromagnetic reference (y = 0.77x + 1.48, r = 0.87, P < .001, delta = -0.91 +/- 2.65 mL). In patients, real-time 3DCE-derived MR volume also showed good correlation and agreement with the reference method (y = 0.89x - 0.38, r = 0.93, P < .001, delta = -4.8 +/- 7.6 mL). CONCLUSIONS: Real-time 3DCE can capture the entire FC image, permitting recognition of the FC zone geometry and reliable MR quantification.
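For readers unfamiliar with the FC method: under the standard hemispherical (PISA) assumption, the measured radius gives an instantaneous flow rate, from which the effective orifice area and regurgitant volume follow. The sketch below uses these textbook relations; the numbers are purely illustrative and are not taken from the study.

```python
import math

def fc_regurgitant_volume(r_cm, v_alias_cm_s, v_peak_cm_s, vti_cm):
    """Hemispherical flow-convergence (PISA) estimate of regurgitant
    volume: r is the radius of the aliasing hemisphere, v_alias the
    aliasing velocity, v_peak the peak MR jet velocity, and vti the
    velocity-time integral of the MR jet."""
    q = 2.0 * math.pi * r_cm ** 2 * v_alias_cm_s   # flow rate, mL/s
    ero = q / v_peak_cm_s                          # effective orifice area, cm^2
    return ero * vti_cm                            # regurgitant volume, mL

# Illustrative values: r = 0.9 cm, Va = 40 cm/s, peak jet 500 cm/s, VTI 150 cm.
rvol = fc_regurgitant_volume(0.9, 40.0, 500.0, 150.0)
```

Real-time 3DCE improves the radius measurement itself by letting the operator confirm that the FC surface is actually hemispherical before applying these formulas.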

  16. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, enabling a robot to observe its environment and perceive in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to reduce computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, non-upright humans, humans leaving and re-entering the field of view (i.e., the re-identification challenge), and human-object and human-human interaction. We conclude that by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.

  17. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanam, A; Min, Y; Beron, P

    Purpose: Patient safety hazards such as the wrong patient or site being treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during radiotherapy setup and alert the therapist before treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each camera was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in an object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was effectively able to recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
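The PCA step used for feature-to-object correlation can be illustrated with a minimal NumPy sketch: extract the principal axes and variances of a segmented 3D point cluster, which form a compact signature that could be compared against stored object models. The graph-cut segmentation and CamShift tracking are not shown, and this workflow is an assumption for illustration, not the authors' code.

```python
import numpy as np

def principal_axes(points):
    """PCA of a 3-D point cluster via SVD of the centered cloud.
    Returns (mean, axes, variances); axes are rows sorted by
    decreasing variance along each principal direction."""
    mean = points.mean(axis=0)
    centered = points - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / (len(points) - 1)
    return mean, vt, variances

# Elongated synthetic cluster, spread mostly along x (like a couch rail).
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 0.1])
_, axes, var = principal_axes(pts)
# The dominant principal axis aligns with x, and variances are ordered.
```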

  18. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  19. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle through 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors, and produces continuous horizontal parallax along a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray reproduces one that passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, the two eyes collect fractional images from different projectors, and all viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  20. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to reach our left and right eyes. As a consequence, we see slightly different images with each eye, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, cinema etc. are well known, e.g. the two-colour anaglyph technique, shutter glasses, polarization filters and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distributions. In advance of STEREO, we test the methods with data from SOHO, which provides different viewpoints through solar rotation. This restricts the analysis to structures that remain stationary for several days; real STEREO data will not be affected by these limitations, however.
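Of the listed techniques, the two-colour anaglyph is the simplest to sketch: take the red channel from the left-eye view and the green and blue channels from the right-eye view. This is a generic illustration of the technique, not the STEREO team's pipeline.

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Two-colour anaglyph: red channel from the left view, green and
    blue from the right. Through red-cyan glasses each eye receives
    its own image, producing the depth impression described above."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Two tiny synthetic views standing in for the two spacecraft images.
left = np.zeros((4, 4, 3), dtype=np.uint8); left[..., 0] = 200
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 1] = 120; right[..., 2] = 90
combined = anaglyph(left, right)
```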

  1. Foliage penetration by using 4-D point cloud data

    NASA Astrophysics Data System (ADS)

    Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.

    2012-06-01

    Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. A new LADAR-derived technology is now under development to generate 4-D datasets (3-D video in a point cloud format). As such, there is a new need for algorithms that can process these data in real time. We propose an algorithm capable of removing vegetation and other objects that may obfuscate concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We use simulated 4-D point cloud data to demonstrate the capabilities of our algorithm.
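The paper does not specify its wavelet scheme, but the general idea of suppressing fine-scale clutter while keeping broad structure can be sketched with a one-level Haar transform and hard thresholding, shown here on a 1-D height profile. All parameters and the signal are illustrative assumptions.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage on a 1-D signal of even length.
    Small detail coefficients (fine-scale clutter such as foliage
    returns) are zeroed; broad structure (a concealed target's
    surface) is kept. Generic sketch, not the paper's actual scheme."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # approximation
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # detail
    d = np.where(np.abs(d) < threshold, 0.0, d)      # hard threshold
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)                 # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Step "target" profile plus small oscillating "foliage" noise.
x = np.repeat([0.0, 5.0], 8) + np.tile([0.3, -0.3], 8)
clean = haar_denoise(x, threshold=1.0)
```

The oscillating component lands entirely in the detail band and is removed, leaving the clean step.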

  2. Real-Time Simulation

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Coryphaeus Software, founded in 1989 by former NASA electronics engineer Steve Lakowske, creates real-time 3D software. Designer's Workbench, the company's flagship product, is a modeling and simulation tool for the development of both static and dynamic 3D databases. Other products soon followed. Activation, specifically designed for game developers, allows developers to play and test 3D games before they commit to a target platform. Game publishers can shorten development time and prove the "playability" of a title, maximizing their chances of introducing a smash hit. Another product, EasyT, lets users create massive, realistic representations of Earth terrains that can be viewed and traversed in real time. Finally, EasyScene software controls the interactions among objects within a virtual world. Coryphaeus products are used on Silicon Graphics workstations and supercomputers to simulate real-world performance in synthetic environments. Customers include aerospace, aviation, architectural and engineering firms, game developers, and the entertainment industry.

  3. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology have furthered its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, a time-consuming correction algorithm is impractical to run during the real-time process. To cope with this issue, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between the distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by an order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the merit of the LUT, 3-D reconstruction can be achieved at 92.34 frames per second.
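A pixel-mapping LUT of the kind described can be sketched as follows: the distorted-image coordinate for every corrected pixel is computed once offline, so per-frame correction reduces to a single indexed gather. A one-term radial distortion model, nearest-neighbour sampling, and all numeric values are illustrative assumptions; the paper's exact model and calibration are not given.

```python
import numpy as np

def build_undistort_lut(h, w, k1, cx, cy):
    """Precompute, once, which distorted-image pixel each corrected
    pixel should sample, using the one-term radial model
    x_d = x * (1 + k1 * r^2) about the principal point (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x, y = xs - cx, ys - cy
    r2 = x ** 2 + y ** 2
    xd = np.clip(np.rint(cx + x * (1 + k1 * r2)), 0, w - 1).astype(int)
    yd = np.clip(np.rint(cy + y * (1 + k1 * r2)), 0, h - 1).astype(int)
    return yd, xd

def undistort(image, lut):
    yd, xd = lut
    return image[yd, xd]   # per-frame cost: just a table lookup

lut = build_undistort_lut(120, 160, k1=1e-6, cx=80.0, cy=60.0)
frame = np.random.default_rng(2).random((120, 160))
corrected = undistort(frame, lut)
```

The expensive trigonometry-free mapping is computed once; each subsequent frame is corrected by fancy indexing, which is what makes the LUT approach compatible with real-time rates.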

  4. Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.

    2002-04-01

    An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual modeling to fully automated reverse engineering systems. Progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate source of 3D data, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurement and the generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurement is based on a PC as the central processing unit and a video camera as the image acquisition device. Original software for Windows 9x implements the complete 3D reconstruction technology for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurement and the methods providing metric characteristics of the reconstructed 3D model. The results of applying the system to 3D reconstruction of complex industrial objects are also presented.

  5. Real-Time 3D Reconstruction from Images Taken from a UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV; no additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For these characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
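Once corresponding points are matched between the two aerial images, depth follows from the standard two-view relation Z = f·B/d (focal length in pixels, baseline between the two exposures, disparity in pixels). A minimal sketch with illustrative numbers that are not from the paper:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard two-view depth relation Z = f * B / d.
    f_px: focal length in pixels, baseline_m: distance between the
    two camera positions in metres, disparity_px: pixel disparity."""
    return f_px * baseline_m / disparity_px

# e.g. f = 1000 px, 5 m baseline, 100 px disparity -> 50 m range
z = depth_from_disparity(1000.0, 5.0, 100.0)
```

The relation also explains the acquisition-geometry optimization mentioned above: a longer baseline increases disparity and hence depth precision, at the cost of harder matching.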

  6. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
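The height-histogram step can be sketched in NumPy as follows: the most populated height bin is taken as the ground level and points near it are labelled ground. The bin size and the simple one-bin neighbourhood rule are illustrative assumptions, and the Gibbs-Markov random field refinement described in the abstract is omitted.

```python
import numpy as np

def ground_mask(points, bin_size=0.2):
    """Height-histogram ground segmentation: histogram the z values,
    take the densest bin as the ground level, and label points within
    one bin of it as ground. (The paper additionally refines labels
    with a Gibbs-Markov random field, omitted here.)"""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)
    g = np.argmax(hist)                       # densest height band
    lo, hi = edges[g] - bin_size, edges[g + 1] + bin_size
    return (z >= lo) & (z <= hi)

# Flat ground near z = 0 with a 3 m "tree" of 50 points above it.
rng = np.random.default_rng(3)
ground = np.column_stack([rng.random((1000, 2)) * 10,
                          rng.normal(0.0, 0.05, 1000)])
tree = np.column_stack([rng.random((50, 2)), rng.random(50) * 3 + 1])
mask = ground_mask(np.vstack([ground, tree]))
```

All tree points fall outside the ground band and survive as non-ground data for the subsequent 3D boundary estimation.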

  7. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  8. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  9. Surface-Plasmon Holography with White-Light Illumination

    NASA Astrophysics Data System (ADS)

    Ozaki, Miyu; Kato, Jun-ichi; Kawata, Satoshi

    2011-04-01

    The recently emerging three-dimensional (3D) displays in electronics shops imitate a depth illusion by overlapping two parallax 2D images, viewed through either polarized glasses that viewers are required to wear or lenticular lenses fixed directly on the display. Holography, on the other hand, provides real 3D imaging, although usually limited to monochrome. The so-called rainbow holograms—mounted, for example, on credit cards—are also produced from parallax images that change color with viewing angle. We report on a holographic technique based on surface plasmons that can reconstruct true 3D color images, where the colors are reconstructed by satisfying the resonance conditions of surface plasmon polaritons for individual wavelengths. Such real 3D color images can be viewed from any angle, just like the original object.

  10. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system, which can then truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  11. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    PubMed

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    The objective of this study was to assess the clinical feasibility of generating 3D printed models of the left atrial appendage (LAA) from real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively protect patients with atrial fibrillation from stroke. However, the anatomical structure of the LAA is so complicated that adequate information about it is essential for successful occlusion. Emerging 3D printing technology has demonstrated the potential to depict structure more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from CT data; however, 3D printing of the LAA from real-time 3D TEE data had not yet been explored. 3D transesophageal echocardiographic data were acquired from 8 patients with atrial fibrillation using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format, and imported into Mimics® software to create 3D models of the LAA, which were printed in a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed the LAAs of all 8 patients. Each LAA model costs approximately CNY 800-1,000, and the whole process takes 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images matched the sizes of those placed in the actual operation. Notably, the 3D printed models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printed models were fully consistent with those placed in the actual operation. The 3D printed models could also predict operating difficulty and the presence of a peridevice leak. 3D printing of the LAA from real-time 3D transesophageal echocardiographic data can thus be applied rapidly and reliably in LAA occlusion to assist with physician planning and decision making. © 2016 S. Karger AG, Basel.

  12. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology is widely used in many fields. Object tracking is one of the important parts of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image still has some unavoidable problems: pixel-based tracking cannot reflect the real position of a target, and it is difficult to track objects across scenes. Building on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's spatial coordinate system, obtained by converting the 2-D pixel coordinates of the target into 3-D coordinates. The experimental results show that our method restores the real position changes of targets well and also accurately recovers the trajectory of the target in space.
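
    The 2-D-to-3-D conversion described above can be illustrated as a back-projection of a tracked pixel onto a known world plane. The sketch below assumes targets move on the ground plane Z = 0 (a common surveillance assumption, not stated explicitly in the abstract); the calibration matrices and the synthetic check are illustrative, not taken from the paper:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) to the world ground plane Z = 0.

    K: 3x3 camera intrinsics (e.g. from Zhang calibration)
    R, t: world-to-camera rotation (3x3) and translation (3,)
    """
    # Homography mapping ground-plane points (X, Y, 1) to pixels
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    X, Y, w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return X / w, Y / w  # world coordinates on the ground plane

# Synthetic check: a camera 3 m above the world origin, looking straight down.
# The principal point should back-project to the origin.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 3.0])
X, Y = pixel_to_ground(320, 240, K, R, t)
```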

  13. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  14. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object in digital form on a computer. 3D scanning is a technology still being developed, especially in developed countries, and current devices are advanced versions with very high prices. This study is basically a simple 3D scanner prototype built at very low investment cost. The prototype consists of a webcam, a rotating desk driven by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects whose radius is constant about the central (pivot) axis. Scanning is performed by imaging the object profile illuminated by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. At each image acquisition, the object on the rotating desk is turned by a fixed angle, so that after one full turn multiple images covering all sides are obtained. The profiles of all images are then extracted to obtain the object's digital dimensions. The digital dimensions are calibrated against a length standard, called a gage block, and the overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object's dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
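
    The geometry of the turntable scan above is straightforward: each laser profile yields (radius, height) samples at a known table angle, which convert to 3D points as in a cylindrical coordinate system. A minimal sketch, with illustrative function and variable names:

```python
import math

def profiles_to_points(profiles, step_deg):
    """Convert laser-line profiles captured on a turntable into 3D points.

    profiles: list of captures, one per table position; each capture is a
        list of (radius, height) pairs extracted from the laser-line image.
    step_deg: rotation of the table between consecutive captures.
    """
    points = []
    for i, profile in enumerate(profiles):
        theta = math.radians(i * step_deg)  # table angle for this capture
        for r, z in profile:
            # cylindrical (r, theta, z) -> Cartesian (x, y, z)
            points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

# Two captures of a cylinder of radius 10, taken 90 degrees apart
pts = profiles_to_points([[(10.0, 0.0)], [(10.0, 0.0)]], 90)
```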

  15. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to a lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The objective of this study was to develop and implement a fast, interactive volume renderer for RT3DE datasets, designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real time throughout the cardiac cycle, and cardiac function and pathology were assessed for each case. The real-time interactive volume visualization system is user-friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  16. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain points toward sensor-based information and, more generally, the use of real-time information from georeferenced features. At the same time, 3D city models are mostly justified as objects for visualization rather than as the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information systems, however, can provide a whole new setup for data fusion within an urban environment and deliver time-critical information that helps preserve our limited resources in the most sustainable way. Using 3D models with consistent object definitions makes it possible to avoid troublesome abstractions of reality and to design even complex urban systems that fuse information from various data sources. Such systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange: the data stream varies from urban domain to urban domain and from system to system, which makes it almost impossible to design one complete system that handles every conceivable case, now and in the future, within a single constrained software design. On several occasions we have advocated a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMOs). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the GMO concept was applied in various situations and on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than on core technical and developmental issues. The project focused primarily on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  17. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time; it is used to represent deforming or moving objects in virtual surgery or 4D ultrasound. Rendering 4D images with conventional ray casting or shear-warp factorization is difficult because of their long rendering times and the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeatedly loading the volume is time-consuming in 4D image rendering. In this study, we propose a method that reduces data loading time by exploiting coherence between the currently loaded volume and the previously loaded one, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already in memory. If a brick passes the test, it is defined as a 3D texture with OpenGL functions; the texture slices of the brick are then mapped onto polygons and blended with OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX, and real-time volume rendering based on 3D texture mapping is now also available on PCs.
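
    The brick-coherence idea can be sketched as follows, assuming that bricks whose content differs beyond a tolerance are the ones re-uploaded as 3D textures (the brick size, tolerance, and similarity measure below are illustrative, not from the paper):

```python
import numpy as np

def upload_changed_bricks(prev_volume, new_volume, brick=16, tol=1e-3):
    """Return indices of bricks whose content changed enough to re-upload.

    Bricks passing the similarity test keep their existing 3D texture;
    only the others would be redefined via OpenGL texture calls.
    """
    changed = []
    nz, ny, nx = new_volume.shape
    for z in range(0, nz, brick):
        for y in range(0, ny, brick):
            for x in range(0, nx, brick):
                sl = (slice(z, z + brick), slice(y, y + brick),
                      slice(x, x + brick))
                # mean absolute difference as a cheap similarity measure
                if np.abs(new_volume[sl] - prev_volume[sl]).mean() > tol:
                    changed.append((z, y, x))
    return changed

prev = np.zeros((32, 32, 32), dtype=np.float32)
new = prev.copy()
new[0:16, 0:16, 0:16] += 1.0        # only one brick deforms
changed = upload_changed_bricks(prev, new)
```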

  18. Real-time electroholography using a multiple-graphics processing unit cluster system with a single spatial light modulator and the InfiniBand network

    NASA Astrophysics Data System (ADS)

    Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-09-01

    Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ~40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
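
    The underlying CGH computation that the GPU cluster parallelizes is, in its naive form, a sum of spherical waves from every 3-D object point at every hologram pixel. A brute-force sketch of that point-source model (the parameter values are illustrative; the paper's actual kernel and pixel pitch may differ):

```python
import numpy as np

def cgh_phase(points, lam, pitch, nx, ny):
    """Brute-force phase-only CGH from 3-D object points: the
    O(points x pixels) sum of spherical waves that multi-GPU
    systems parallelize."""
    k = 2.0 * np.pi / lam
    x = (np.arange(nx) - nx / 2) * pitch   # hologram pixel coordinates
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=np.complex128)
    for px, py, pz, amp in points:         # accumulate each point source
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r)
    return np.angle(field)                 # phase in (-pi, pi]

# One object point 10 cm behind a 64 x 64 hologram with 8 um pixel pitch
phase = cgh_phase([(0.0, 0.0, 0.1, 1.0)], lam=532e-9, pitch=8e-6, nx=64, ny=64)
```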

  19. [Real-time three-dimensional (4D) ultrasound-guided prostatic biopsies on a phantom. Comparative study versus 2D guidance].

    PubMed

    Long, Jean-Alexandre; Daanen, Vincent; Moreau-Gaudry, Alexandre; Troccaz, Jocelyne; Rambeaud, Jean-Jacques; Descotes, Jean-Luc

    2007-11-01

    The objective of this study was to determine the added value of real-time three-dimensional (4D) ultrasound guidance of prostatic biopsies on a prostate phantom, in terms of precision of guidance and distribution. A prostate phantom was constructed, and a real-time 3D ultrasonograph connected to a 5.9 MHz transrectal volume transducer was used. Fourteen operators performed 336 biopsies, first with 2D guidance and then with 4D guidance, according to a 12-biopsy protocol. Biopsy tracts were modelled by segmentation in a 3D ultrasound volume. Dedicated software visualized the biopsy tracts in the reference prostate and evaluated the zone biopsied. A comparative study assessed the added value of 4D over 2D guidance by evaluating the precision of entry points and target points. Distribution was evaluated by measuring the volume investigated and by a redundancy ratio of the biopsy points. The precision of the biopsy protocol was significantly improved by 4D guidance (p = 0.037). No increase in the biopsied volume and no improvement in the distribution of biopsies were observed with 4D compared to 2D guidance. Real-time 3D ultrasound-guided prostate biopsy on a phantom model thus appears to improve the precision and reproducibility of a biopsy protocol, but not the distribution of the biopsies.

  20. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology have furthered its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, no camera lens is perfect, and lens distortion does influence measurement accuracy, a fact often overlooked in existing real-time 3-D shape measurement systems. Here we present a novel high-speed, real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping between the distorted image and the corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then derived from the solved height. A lookup table (LUT) method is also introduced for fast data processing. Our experimental results show that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second is achieved.
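
    The pre-computed pixel mapping can be sketched with a single radial distortion term; the actual distortion model, coefficients, and camera parameters used in the paper may differ, and the values below are purely illustrative:

```python
import numpy as np

def build_undistort_lut(width, height, k1, fx, fy, cx, cy):
    """Precompute, for every corrected pixel, where to sample the
    distorted image (single radial term k1)."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    x = (u - cx) / fx                 # normalized image coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    xd = x * (1.0 + k1 * r2)          # apply radial distortion
    yd = y * (1.0 + k1 * r2)
    map_u = xd * fx + cx              # source pixel coordinates to sample
    map_v = yd * fy + cy
    return map_u.astype(np.float32), map_v.astype(np.float32)

map_u, map_v = build_undistort_lut(640, 480, k1=-0.1,
                                   fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# The principal point sees no radial distortion, so it maps to itself
```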

  1. Experimental Demonstration of Longitudinal Magnification

    ERIC Educational Resources Information Center

    Razpet, Nada; Susman, Katarina; Cepic, Mojca

    2009-01-01

    We describe an experiment which enables the observation of longitudinal magnification for the real image of a three-dimensional (3D) object formed by a converging lens. The experiment also shows the absence of longitudinal inversion. Possible reasons for misconceptions with respect to real images and longitudinal inversions are discussed and a…

  2. Reproducing 2D breast mammography images with 3D printed phantoms

    NASA Astrophysics Data System (ADS)

    Clark, Matthew; Ghammraoui, Bahaa; Badal, Andreu

    2016-03-01

    Mammography is currently the standard imaging modality used to screen women for breast abnormalities and, as a result, it is a tool of great importance for the early detection of breast cancer. Physical phantoms are commonly used as surrogates of breast tissue to evaluate some aspects of the performance of mammography systems. However, most phantoms do not reproduce the anatomic heterogeneity of real breasts. New fabrication technologies, such as 3D printing, have created the opportunity to build more complex, anatomically realistic breast phantoms that could potentially assist in the evaluation of mammography systems. The primary objective of this work is to present a simple, easily reproducible methodology to design and print 3D objects that replicate the attenuation profile observed in real 2D mammograms. The secondary objective is to evaluate the capabilities and limitations of the competing 3D printing technologies, and characterize the x-ray properties of the different materials they use. Printable phantoms can be created using the open-source code introduced in this work, which processes a raw mammography image to estimate the amount of x-ray attenuation at each pixel, and outputs a triangle mesh object that encodes the observed attenuation map. The conversion from the observed pixel gray value to a column of printed material with equivalent attenuation requires certain assumptions and knowledge of multiple imaging system parameters, such as x-ray energy spectrum, source-to-object distance, compressed breast thickness, and average breast material attenuation. A detailed description of the new software, a characterization of the printed materials using x-ray spectroscopy, and an evaluation of the realism of the sample printed phantoms are presented.
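
    The gray-value-to-material-column conversion described above rests on the Beer-Lambert law. A minimal sketch, assuming a linear detector response and a known attenuation coefficient for the printing material (the helper name and values are illustrative):

```python
import math

def column_thickness(pixel_value, i0, mu_material):
    """Thickness of printing material reproducing a pixel's attenuation.

    Beer-Lambert: I = I0 * exp(-mu * t)  =>  t = ln(I0 / I) / mu
    pixel_value: detected intensity I (assumed linear with exposure)
    i0: unattenuated intensity
    mu_material: linear attenuation coefficient of the printing material
        at the effective beam energy (assumed known from characterization)
    """
    return math.log(i0 / pixel_value) / mu_material

# A pixel at half the unattenuated intensity, mu = 0.5 per cm
t = column_thickness(500.0, 1000.0, 0.5)  # -> ln(2)/0.5, about 1.386 cm
```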

  3. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye-tracking system to measure gaze positions in real time. The gaze points were used in conjunction with the disparity map to extract the disparity of the object of interest. Horizontal image translation was performed to bring the fixated object onto the screen plane. User preference between the standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.

  4. The Role of Interactional Quality in Learning from Touch Screens during Infancy: Context Matters.

    PubMed

    Zack, Elizabeth; Barr, Rachel

    2016-01-01

    Interactional quality has been shown to enhance learning during book reading and play, but has not been examined during touch screen use. Learning to apply knowledge from a touch screen is complex for infants because it involves transfer of learning between a two-dimensional (2D) screen and three-dimensional (3D) object in the physical world. This study uses a touch screen procedure to examine interactional quality measured via maternal structuring, diversity of maternal language, and dyadic emotional responsiveness and infant outcomes during a transfer of learning task. Fifty 15-month-old infants and their mothers participated in this semi-naturalistic teaching task. Mothers were given a 3D object, and a static image of the object presented on a touch screen. Mothers had 5 min to teach their infant that a button on the real toy works in the same way as a virtual button on the touch screen (or vice versa). Overall, 64% of infants learned how to make the button work, transferring learning from the touch screen to the 3D object or vice versa. Infants were just as successful in the 3D to 2D transfer direction as they were in the 2D to 3D transfer direction. A cluster analysis based on emotional responsiveness, the proportion of diverse maternal verbal input, and amount of maternal structuring resulted in two levels of interactional quality: high quality and moderate quality. A logistic regression revealed the level of interactional quality predicted infant transfer. Infants were 19 times more likely to succeed and transfer learning between the touch screen and real object if they were in a high interactional quality dyad, even after controlling for infant activity levels. The present findings suggest that interactional quality between mother and infant plays an important role in making touch screens effective teaching tools for infants' learning.

  5. The Role of Interactional Quality in Learning from Touch Screens during Infancy: Context Matters

    PubMed Central

    Zack, Elizabeth; Barr, Rachel

    2016-01-01

    Interactional quality has been shown to enhance learning during book reading and play, but has not been examined during touch screen use. Learning to apply knowledge from a touch screen is complex for infants because it involves transfer of learning between a two-dimensional (2D) screen and three-dimensional (3D) object in the physical world. This study uses a touch screen procedure to examine interactional quality measured via maternal structuring, diversity of maternal language, and dyadic emotional responsiveness and infant outcomes during a transfer of learning task. Fifty 15-month-old infants and their mothers participated in this semi-naturalistic teaching task. Mothers were given a 3D object, and a static image of the object presented on a touch screen. Mothers had 5 min to teach their infant that a button on the real toy works in the same way as a virtual button on the touch screen (or vice versa). Overall, 64% of infants learned how to make the button work, transferring learning from the touch screen to the 3D object or vice versa. Infants were just as successful in the 3D to 2D transfer direction as they were in the 2D to 3D transfer direction. A cluster analysis based on emotional responsiveness, the proportion of diverse maternal verbal input, and amount of maternal structuring resulted in two levels of interactional quality: high quality and moderate quality. A logistic regression revealed the level of interactional quality predicted infant transfer. Infants were 19 times more likely to succeed and transfer learning between the touch screen and real object if they were in a high interactional quality dyad, even after controlling for infant activity levels. The present findings suggest that interactional quality between mother and infant plays an important role in making touch screens effective teaching tools for infants’ learning. PMID:27625613

  6. V-Man Generation for 3-D Real Time Animation. Chapter 5

    NASA Technical Reports Server (NTRS)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish it. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon the unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is processed to generate, first, animatable body meshes, second, 3D garments and, finally, skinned body meshes.

  7. Development and Evaluation of Real-Time Volumetric Compton Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Barnowski, Ross Wegner

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. The real-time tracking allows the imager to be moved throughout the environment or around a particular object of interest, obtaining the multiple perspectives necessary for standoff 3D imaging. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, can be incorporated into the image reconstruction reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and two different mobile gamma-ray imaging platforms. The first is a cart-based imaging platform known as the Volumetric Compton Imager (VCI), comprising two 3D position-sensitive high purity germanium (HPGe) detectors, exhibiting excellent gamma-ray imaging characteristics, but with limited mobility due to the size and weight of the cart. The second system is the High Efficiency Multimodal Imager (HEMI) a hand-portable gamma-ray imager comprising 96 individual cm3 CdZnTe crystals arranged in a two-plane, active-mask configuration. The HEMI instrument has poorer energy and angular resolution than the VCI, but is truly hand-portable, allowing the SDF concept to be tested in multiple environments and for more challenging imaging scenarios. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. 
Each of the two mobile imaging systems is used to demonstrate SDF for a variety of scenarios, including general search and mapping scenarios with several point gamma-ray sources over the range of energies relevant for Compton imaging. More specific imaging scenarios are also addressed, including directed search and object interrogation. Finally, the volumetric image quality is quantitatively investigated with respect to the number of Compton events acquired during a measurement, the list-mode uncertainty of the Compton cone data, and the uncertainty in the pose estimate from the real-time tracking algorithm. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of gamma-ray image reconstruction and by providing context for the 3D localization of gamma-ray sources within the environment in real time.

  8. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction and matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes with partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
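
    The Kalman-filter stage can be sketched as a constant-velocity filter over the sphere centroids extracted from the point cloud; the noise levels and time step below are illustrative values, not the paper's:

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-2, r=1e-1):
    """Constant-velocity Kalman filter over 3D centroid measurements.

    measurements: (N, 3) array of sphere centroids from the point cloud.
    Returns the filtered positions. q/r set the process/measurement
    noise levels.
    """
    # State: [x, y, z, vx, vy, vz]
    F = np.block([[np.eye(3), dt * np.eye(3)],
                  [np.zeros((3, 3)), np.eye(3)]])
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    Q, R = q * np.eye(6), r * np.eye(3)
    x, P = np.zeros(6), np.eye(6)
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K @ (z - H @ x)                         # update
        P = (np.eye(6) - K @ H) @ P
        out.append(x[:3].copy())
    return np.array(out)

zs = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
track = kalman_track(zs)   # filtered positions of the moving sphere
```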

  9. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction and matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes with partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  10. Large holographic displays for real-time applications

    NASA Astrophysics Data System (ADS)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

    Holography is generally accepted as the ultimate approach to displaying three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from reaching a satisfactory laboratory prototype, let alone a marketable one. The reason is the small cell pitch of a CGH, which results in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have long limited the use of holography to special applications, such as optical filtering, interference, beam forming, and digital holography for capturing the 3-D shape of objects. SeeReal Technologies has developed a new approach to real-time-capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper shows that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), and in particular today's feasible LCD panels, are suited to reconstructing large 3-D scenes that can be observed from large viewing angles. To achieve this, the original holographic concept of encoding information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution, and thus the computational load, by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches was built and demonstrated at the SID 2007 conference and exhibition and at several other events.

  11. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  12. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  13. VIRTOPSY--scientific documentation, reconstruction and animation in forensic: individual and real 3D data based geo-metric approach including optical body/object surface and radiological CT/MRI scanning.

    PubMed

    Thali, Michael J; Braun, Marcel; Buck, Ursula; Aghayev, Emin; Jackowski, Christian; Vock, Peter; Sonnenschein, Martin; Dirnhofer, Richard

    2005-03-01

    Until today, most documentation of forensically relevant medical findings has been limited to traditional 2D photography, conventional 2D radiographs, sketches, and verbal description. Classic documentation in forensic science still has limitations, especially when 3D documentation is necessary. The goal of this paper is to demonstrate new approaches based on real 3D geometric data. It presents approaches to 3D geometric documentation of injuries on the body surface and of internal injuries, in both living and deceased cases. By combining modern imaging methods such as photogrammetry, optical surface scanning, and radiological CT/MRI scanning, it is demonstrated that a real, fully 3D, data-based, individual documentation of the body surface and internal structures is possible in a non-invasive and non-destructive manner. Using data merging/fusion and animation, it is possible to answer reconstructive questions about the dynamic development of patterned injuries (morphologic imprints) and to evaluate whether they can be matched or linked to suspected injury-causing instruments. To our knowledge, this is the first time optical and radiological 3D scanning have been used to document forensically relevant injuries of the human body in combination with vehicle damage. This complementary documentation approach makes individual, real-data-based forensic analysis and animation possible, linking body injuries to vehicle deformations or damage. These data also allow conclusions to be drawn for automobile accident research, optimization of vehicle safety (pedestrian and passenger), and further development of crash dummies. Real-3D-data-based documentation opens a new horizon for scientific reconstruction and animation, bringing added value and a real quality improvement to forensic science.

  14. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    PubMed

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual content, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence: mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in experiments based on the two scenarios, and the system's performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.

  15. The holographic display of three-dimensional medical objects through the usage of a shiftable cylindrical lens

    NASA Astrophysics Data System (ADS)

    Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao

    2014-09-01

    Through the creative usage of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for medical object display in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all computer generated holograms (CGHs) needed by the display system are only one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device. Complicated 3D message reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method and the obtained horizontal viewing angle reaches 28°.

  16. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Treesearch

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with UAV and computationally demanding postflight image processing. Insufficient feature overlaps across images is a common shortcoming in post-flight image...

  17. Application of multislice spiral CT for guidance of insertion of thoracic spine pedicle screws: an in vitro study.

    PubMed

    Wang, Juan; Zhou, Yicheng; Hu, Ning; Wang, Renfa

    2006-01-01

    To investigate the value of guidance by three-dimensional (3D) reconstruction from multi-slice spiral CT (MSCT) for the placement of pedicle screws, the 3D anatomical data of the thoracic pedicles were measured by MSCT in two embalmed human cadaveric thoracic spines (T1-T10) to guide the insertion of pedicle screws. After the screws were pulled out, the pathways were filled with contrast media. The PW, PH, TSA, and SSA of the developed pathways were measured on the CT images and also on the real specimens by caliper and goniometer. Analysis of variance demonstrated that the difference between the CT scans and the real objects was not statistically significant (P > 0.05). Moreover, the difference between the pedicle axis and the developed pathway was also not statistically significant (P > 0.05). The data obtained from 3D reconstruction with MSCT demonstrated individualized standards that are not only accurate but also helpful for the successful placement of pedicle screws.

  18. New technique for real-time distortion-invariant multiobject recognition and classification

    NASA Astrophysics Data System (ADS)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant optical pattern recognition (OPR) system was established to perform 3D multi-object distortion-invariant automatic pattern recognition. A wavelet transform was used for digital preprocessing of the input scene, suppressing the noisy background and enhancing the object to be recognized. A three-layer backpropagation artificial neural network was used in correlation-signal post-processing to perform multi-object distortion-invariant recognition and classification. The real-time processing capability of the C-80 and NOA and multithreaded programming were used to perform high-speed parallel multitask processing and to speed up the post-processing of regions of interest (ROIs). The reference filter library was constructed for distorted versions of the 3D object model images, based on measured distortion-parameter tolerances for rotation, azimuth, and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, using the preprocessing and post-processing above, the nonlinear optimum-filtering algorithm, the filter-library construction technique, and multithreaded programming, a high recognition probability and recognition rate were obtained for the real-time multi-object distortion-invariant OPR system. Recognition reliability and speed were greatly improved. These techniques are very useful for automatic target recognition.

  19. Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization

    NASA Astrophysics Data System (ADS)

    House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and where limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. The accuracy of the camera was tested by comparison with a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: Camera accuracy yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, with proper marker orientation constantly maintained. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.

  20. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture.

    PubMed

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy.

  1. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed, and analyzed. Like any other model, maps are simplified representations of the real world, so visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available for building 3D GIS models. One objective of this research was to examine ArcGIS and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web as a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines how spatial information is shared and processed. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications, and client-server technologies; such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  2. Techniques for the rapid display and manipulation of 3-D biomedical data.

    PubMed

    Goldwasser, S M; Reynolds, R A; Talton, D A; Walsh, E S

    1988-01-01

    The use of fully interactive 3-D workstations with true real-time performance will become increasingly common as technology matures and economical commercial systems become available. This paper provides a comprehensive introduction to high speed approaches to the display and manipulation of 3-D medical objects obtained from tomographic data acquisition systems such as CT, MR, and PET. A variety of techniques are outlined including the use of software on conventional minicomputers, hardware assist devices such as array processors and programmable frame buffers, and special purpose computer architecture for dedicated high performance systems. While both algorithms and architectures are addressed, the major theme centers around the utilization of hardware-based approaches including parallel processors for the implementation of true real-time systems.

  3. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The algorithm estimates the orientation of a specific known 3D object based on its 3D model and consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere, with the point distribution following the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches the descriptor of an observed image against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and its real-time performance in the FPGA-based vision system was demonstrated.
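
    The learning stage's viewpoint sampling can be sketched as follows. The abstract names the geosphere principle; the snippet below instead uses a Fibonacci lattice, a simpler scheme that also spreads viewpoints nearly evenly over a sphere (an assumption for illustration, not the paper's exact method).

```python
import math

def viewpoints_on_sphere(n, radius=1.0):
    """Return n nearly evenly distributed viewpoints on a sphere
    (Fibonacci lattice; a stand-in for geosphere subdivision)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))     # golden angle in radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n             # uniform spacing in z
        r = math.sqrt(1.0 - z * z)                # ring radius at height z
        theta = golden * i                        # spiral around the axis
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

pts = viewpoints_on_sphere(200)   # candidate camera positions for training
```

    Each returned point would serve as a camera position looking at the model's centroid, from which one training image and its descriptor are generated.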

  4. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  5. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-11-30

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
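
    The sheet-of-light triangulation behind such scanners reduces to intersecting a camera ray with the laser plane. The geometry below (camera at the origin, emitter on the X axis, sheet tilted toward the optical axis) is a generic textbook setup for illustration, not the parameters of the paper's PSD rig.

```python
import math

def sheet_of_light_depth(x_norm, baseline, laser_angle):
    """Depth from laser triangulation. The camera at the origin looks along
    +Z; the emitter at (baseline, 0, 0) projects a light sheet tilted by
    laser_angle toward the optical axis, i.e. the plane X = b - Z*tan(a).
    x_norm is the normalized image coordinate (X/Z) of the laser stripe."""
    return baseline / (x_norm + math.tan(laser_angle))

# forward model: place a surface point at a known depth, image it, recover it
Z_true, b, a = 2.0, 0.5, 0.3
X = b - Z_true * math.tan(a)       # where the sheet strikes the surface
x_norm = X / Z_true                # its normalized image coordinate
Z_est = sheet_of_light_depth(x_norm, b, a)
```

    Sweeping the sheet (or the object) and repeating this per detector element is what turns a 1-D array of position readings into a 3D profile.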

  6. Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat

    PubMed Central

    2018-01-01

    Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization to parse objects in the environment and to guide movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects, tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat’s inspection of objects in the physical environment sharpens tuning of single neurons, and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, in combination with an echo model that computes the animal’s instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task. PMID:29633711

  7. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.

  8. Special effects used in creating 3D animated scenes-part 1

    NASA Astrophysics Data System (ADS)

    Avramescu, A. M.

    2015-11-01

    Today, with the help of computers, we can create special effects that look so real that we almost do not perceive them as artificial; they are hard to distinguish from the real elements on the screen. As the 3D field becomes increasingly accessible, with more and more areas of application, 3D technology moves easily from architecture to product design. Realistic 3D animations are used as a means of learning, for multimedia presentations of large global corporations, for special effects, and even for virtual actors in movies. Technology, as part of the art of film, is considered a prerequisite, but cinematography was the first art that had to wait for the right intersection of technological development, innovation, and human vision to reach full achievement. Increasingly, most industries use 3D (three-dimensional) sequences: rendered graphics, commercials, and special effects for movies are all designed in 3D. The key to achieving realistic visual effects is successfully combining distinct elements (characters, objects, images, and video scenes) so that they work together in perfect harmony. This article examines a present-day game design. Given the advanced technology and futuristic vision of designers, we now have many different and varied game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene and are essential for conveying its emotional tone. Creating special effects is a work of finesse required to achieve high-quality scenes, and they can also be used to draw the onlooker's attention to an object in a scene. According to the study conducted, the best-selling game of 2010 was Call of Duty: Modern Warfare 2. The article therefore aims for the presented scene to resemble many locations from this type of game: specifically, a place in the Middle East, a very popular setting among game developers.

  9. Interactive X-ray and proton therapy training and simulation.

    PubMed

    Hamza-Lup, Felix G; Farrar, Shane; Leon, Erik

    2015-10-01

    External beam X-ray therapy (XRT) and proton therapy (PT) are effective and widely accepted forms of treatment for many types of cancer. However, the procedures require extensive computerized planning. Current planning systems for both XRT and PT have insufficient visual aid to combine real patient data with the treatment device geometry to account for unforeseen collisions among system components and the patient. The 3D surface representation (S-rep) is a widely used scheme to create 3D models of physical objects. 3D S-reps have been successfully used in CAD/CAM and, in conjunction with texture mapping, in the modern gaming industry to customize avatars and improve the gaming realism and sense of presence. We are proposing a cost-effective method to extract patient-specific S-reps in real time and combine them with the treatment system geometry to provide a comprehensive simulation of the XRT/PT treatment room. The X3D standard is used to implement and deploy the simulator on the web, enabling its use not only for remote specialists' collaboration, simulation, and training, but also for patient education. An objective assessment of the accuracy of the S-reps obtained proves the potential of the simulator for clinical use.

  10. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image forms a pattern radiating from the focus of expansion. We propose a motion analysis system that uses the epipolar constraint to accurately estimate the 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.
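
    The radiating-flow property the paper relies on can be sketched: under pure forward translation, every flow vector points away from the focus of expansion (FOE), giving one linear constraint per vector. The least-squares recovery below is an illustrative sketch of that property alone, not the paper's full epipolar-constraint system.

```python
def focus_of_expansion(points, flows):
    """Least-squares FOE: each optical-flow vector (u, v) at pixel (x, y)
    is parallel to (x - fx, y - fy), so u*(y - fy) - v*(x - fx) = 0,
    i.e. one linear equation v*fx - u*fy = v*x - u*y per vector."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), (u, v) in zip(points, flows):
        r = v * x - u * y        # right-hand side of [v, -u] . [fx, fy] = r
        a11 += v * v             # accumulate normal equations A^T A, A^T b
        a12 += -u * v
        a22 += u * u
        b1 += v * r
        b2 += -u * r
    det = a11 * a22 - a12 * a12  # assumes flow directions are not degenerate
    fx = (a22 * b1 - a12 * b2) / det
    fy = (a11 * b2 - a12 * b1) / det
    return fx, fy

# synthetic radial flow field expanding from a FOE at (3, 2)
pts = [(0.0, 0.0), (6.0, 7.0), (10.0, 2.0), (1.0, 9.0)]
flows = [((x - 3.0) * 0.5, (y - 2.0) * 0.5) for x, y in pts]
```

    With noisy real imagery the same normal equations are solved over many flow vectors, and the recovered FOE anchors the depth estimation along each ray.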

  11. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with six degrees of freedom in real time using 3D digital image correlation (DIC), with two examples from medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved computational efficiency by about 4-10 times compared with the traditional DIC method. The system is aimed at orthognathic surgery navigation, tracking the maxilla segment after a LeFort I osteotomy. Experiments showed that noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin-surface shape evaluation of a hand during finger stretching exercises, indicating great potential for tracking muscle and skin movements.
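
    The integer-pixel search that the paper accelerates is, at its core, a correlation match: slide a reference subset over candidate locations and keep the best-scoring one. The zero-mean normalized cross-correlation (ZNCC) sketch below illustrates that step with flattened gray-value patches; the paper's actual speed-up is not detailed in the abstract, so this is only the baseline operation.

```python
import math

def zncc(patch, ref):
    """Zero-mean normalized cross-correlation of two equal-size patches
    (flattened lists of gray values); 1.0 means a perfect affine match."""
    n = len(ref)
    mp, mr = sum(patch) / n, sum(ref) / n
    num = sum((p - mp) * (r - mr) for p, r in zip(patch, ref))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((r - mr) ** 2 for r in ref))
    return num / den

def best_match(candidates, ref):
    """Integer-pixel search step: return the index of the candidate
    patch that correlates best with the reference subset."""
    scores = [zncc(c, ref) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)

ref = [1, 2, 3, 4]                       # reference subset (flattened)
candidates = [[4, 3, 2, 1], [2, 4, 6, 8], [5, 1, 4, 2]]
```

    In a full DIC pipeline the winning integer offset then seeds a sub-pixel refinement, which is where the final 10^-3 mm-level precision comes from.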

  12. Real-time classification of vehicles by type within infrared imagery

    NASA Astrophysics Data System (ADS)

    Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.

    2016-10-01

    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
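
    The classical-photogrammetry range estimate mentioned above reduces, for a pinhole camera, to similar triangles: a target of known physical height H that appears h pixels tall under focal length f (in pixels) lies at distance Z = f * H / h. The numbers below are illustrative assumptions, not values from the paper's sensor.

```python
def target_distance(focal_px, real_height_m, pixel_height):
    """Pinhole similar-triangles range estimate: Z = f * H / h, with the
    focal length f in pixels, target height H in metres, and the imaged
    target height h in pixels."""
    return focal_px * real_height_m / pixel_height

# a 1.8 m tall target imaged 60 px tall by a 1000 px focal-length camera
range_m = target_distance(1000.0, 1.8, 60.0)
```

    Combining this per-frame range with the detection's bearing gives the approximate 3D position that the Kalman filter then tracks over time.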

  13. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  14. Tensor body: real-time reconstruction of the human body and avatar synthesis from RGB-D.

    PubMed

    Barmpoutis, Angelos

    2013-10-01

    Real-time 3-D reconstruction of the human body has many applications in anthropometry, telecommunications, gaming, fashion, and other areas of human-computer interaction. In this paper, a novel framework is presented for reconstructing the 3-D model of the human body from a sequence of RGB-D frames. The reconstruction is performed in real time while the human subject moves arbitrarily in front of the camera. The method employs a novel parameterization of cylindrical-type objects using Cartesian tensor and b-spline bases along the radial and longitudinal dimension respectively. The proposed model, dubbed tensor body, is fitted to the input data using a multistep framework that involves segmentation of the different body regions, robust filtering of the data via a dynamic histogram, and energy-based optimization with positive-definite constraints. A Riemannian metric on the space of positive-definite tensor splines is analytically defined and employed in this framework. The efficacy of the presented methods is demonstrated in several real-data experiments using the Microsoft Kinect sensor.

  15. The effects of surface gloss and roughness on color constancy for real 3-D objects.

    PubMed

    Granzier, Jeroen J M; Vergne, Romain; Gegenfurtner, Karl R

    2014-02-21

    Color constancy denotes the phenomenon that the appearance of an object remains fairly stable under changes in illumination and background color. Most of what we know about color constancy comes from experiments using flat, matte surfaces placed on a single plane under diffuse illumination simulated on a computer monitor. Here we investigate whether material properties (glossiness and roughness) have an effect on color constancy for real objects. Subjects matched the color and brightness of cylinders (painted red, green, or blue) illuminated by simulated daylight (D65) or by a reddish light with a Munsell color book illuminated by a tungsten lamp. The cylinders were either glossy or matte and either smooth or rough. The object was placed in front of a black background or a colored checkerboard. We found that color constancy was significantly higher for the glossy objects compared to the matte objects, and higher for the smooth objects compared to the rough objects. This was independent of the background. We conclude that material properties like glossiness and roughness can have significant effects on color constancy.

  16. Projector-Based Augmented Reality for Quality Inspection of Scanned Objects

    NASA Astrophysics Data System (ADS)

    Kern, J.; Weinmann, M.; Wursthorn, S.

    2017-09-01

After scanning or reconstructing the geometry of objects, we need to inspect the result of our work. Are there any parts missing? Is every detail covered in the desired quality? We typically do this by looking at the resulting point clouds or meshes of our objects on-screen. What if we could see the information directly visualized on the object itself? Augmented reality is the generic term for bringing virtual information into our real environment. In our paper, we show how we can project any 3D information, like thematic visualizations or specific monitoring information with reference to our object, onto the object's surface itself, thus augmenting it with additional information. For small objects that could for instance be scanned in a laboratory, we propose a low-cost method involving a projector-camera system to solve this task. The user only needs a calibration board with coded fiducial markers to calibrate the system and to estimate the projector's pose later on for projecting textures with information onto the object's surface. Changes within the projected 3D information or of the projector's pose will be applied in real time. Our results clearly reveal that such a simple setup delivers good quality of the augmented information.
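Once the projector's pose and intrinsics are known from the fiducial-marker calibration, mapping information onto the surface reduces to the pinhole camera model. A minimal sketch (the matrix values below are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def project_point(K, R, t, X):
    """Pinhole model: map a 3D surface point X (world frame) to projector
    pixel coordinates, x ~ K (R X + t)."""
    Xc = R @ X + t            # world -> projector frame
    x = K @ Xc                # apply intrinsics
    return x[:2] / x[2]       # perspective division

# Illustrative intrinsics -- a real system obtains K, R, t from calibration:
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

Rendering the texture so that each surface point receives its intended pixel is then the inverse of this mapping, evaluated over the scanned mesh.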

  17. Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.

    PubMed

    Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S

    2015-01-01

    Additive manufacturing or 3D printing is the process by which three dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics in a layered fashion resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons.

  18. Lagrangian 3D tracking of fluorescent microscopic objects in motion

    NASA Astrophysics Data System (ADS)

    Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.

    2017-05-01

We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, which determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, which determines the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices with or without flow.

  19. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, which determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, which determines the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices with or without flow.
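The closed-loop x, y stage control described in the two records above can be sketched as a proportional controller driven by the spot's intensity-weighted centroid. The gain and micron-per-pixel scale below are illustrative assumptions, not the instrument's actual parameters:

```python
import numpy as np

def stage_correction(frame, target=(64, 64), gain=0.8, um_per_px=0.5):
    """One servo step: locate the fluorescent spot as the intensity-weighted
    centroid of the frame, then return a proportional x, y stage move (in
    microns) that re-centres it on the target pixel."""
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    cy = (ys * frame).sum() / total             # centroid row
    cx = (xs * frame).sum() / total             # centroid column
    dx = -gain * (cx - target[1]) * um_per_px   # move opposite to the error
    dy = -gain * (cy - target[0]) * um_per_px
    return dx, dy
```

Run once per frame, this loop keeps the object fixed in the observation frame while the recorded stage positions trace out its Lagrangian trajectory.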

  20. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

Recently, the construction of buildings in our country has become more complex, and a strata objects database has become more important for registering the real world as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper discusses the current 2D geospatial database and the future need for a 3D geospatial database; it then develops a strata objects database using the LADM standard data model and analyzes the result. The current cadastre system in Malaysia, including strata titles, is discussed, and the problems in the 2D geospatial database are listed. The design of the strata objects database proceeds through conceptual, logical and physical database design. The strata objects database allows us to find both non-spatial and spatial strata title information, and thus shows the location of each strata unit. This development may help in handling strata titles and their associated information.

  1. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…
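The end product of such a pipeline is an .stl file, whose ASCII form is simple enough to generate directly. A minimal writer, independent of VMD, with facet normals computed from vertex winding order (a sketch of the file format, not the tutorial's own tooling):

```python
def write_ascii_stl(path, triangles, name="molecule"):
    """Write triangles (each a (v1, v2, v3) of 3-tuples) as an ASCII .stl."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    lines = ["solid %s" % name]
    for v1, v2, v3 in triangles:
        n = cross(sub(v2, v1), sub(v3, v1))          # facet normal from winding
        mag = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5 or 1.0
        n = tuple(c / mag for c in n)                # normalise
        lines.append("  facet normal %g %g %g" % n)
        lines.append("    outer loop")
        for v in (v1, v2, v3):
            lines.append("      vertex %g %g %g" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid %s" % name)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

Most slicer front-ends for low-cost 3D printers accept exactly this format, which is why the .pdb-to-.stl route works end to end.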

  2. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) data can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from images obtained by MRI is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from MRI data sets. The proposed method demonstrates good accuracy in a comparison with existing segmentation approaches on real MRI data.

  3. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  4. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in near real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, to accelerate rendering, a thin shell derived from the detected contours is defined to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  5. High resolution, wide field of view, real time 340GHz 3D imaging radar for security screening

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Hunter, Robert I.; Cassidy, Scott L.; Llombart, Nuria; Gandini, Erio; Bryllert, Tomas; Ferndahl, Mattias; Lindström, Hannu; Tenhunen, Jussi; Vasama, Hannu; Huopana, Jouni; Selkälä, Timo; Vuotikka, Antti-Jussi

    2017-05-01

The EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) is developing a demonstrator system for next generation airport security screening which will combine passive and active submillimeter wave imaging sensors. We report on the development of the 340 GHz 3D imaging radar which achieves high volumetric resolution over a wide field of view with high dynamic range and a high frame rate. A sparse array of 16 radar transceivers is coupled with high speed mechanical beam scanning to achieve a field of view of 1 × 1 × 1 m³ and a 10 Hz frame rate.
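For an FMCW-style imaging radar like the one described, range resolution follows directly from the sweep bandwidth via ΔR = c / (2B). A one-line sketch (the 30 GHz bandwidth in the example is an illustrative figure, not the system's published specification):

```python
def range_resolution(bandwidth_hz, c=299_792_458.0):
    """FMCW radar range resolution: dR = c / (2 * B)."""
    return c / (2.0 * bandwidth_hz)

# A 30 GHz sweep gives roughly 5 mm range resolution.
```

Submillimeter-wave carriers such as 340 GHz make such wide sweeps practical, which is what enables the high volumetric resolution claimed above.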

  6. MO-A-9A-01: Innovation in Medical Physics Practice: 3D Printing Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehler, E; Perks, J; Rasmussen, K

    2014-06-15

3D printing, also called additive manufacturing, has great potential to advance the field of medicine. Many medical uses have been exhibited, from facial reconstruction to the repair of pulmonary obstructions. The strength of 3D printing is to quickly convert a 3D computer model into a physical object. Medical use of 3D models is already ubiquitous with technologies such as computed tomography and magnetic resonance imaging; thus tailoring 3D printing technology to medical functions has the potential to impact patient care. This session will discuss applications to the field of Medical Physics. Topics discussed will include an introduction to 3D printing methods as well as examples of real-world uses of 3D printing spanning clinical and research practice in diagnostic imaging and radiation therapy. The session will also compare 3D printing to other manufacturing processes and discuss a variety of uses of 3D printing technology outside the field of Medical Physics. Learning Objectives: Understand the technologies available for 3D printing; understand methods to generate 3D models; identify the benefits and drawbacks of rapid prototyping / 3D printing; understand the potential issues related to clinical use of 3D printing.

  7. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture

    PubMed Central

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L.

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy. PMID:29238513

  8. Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver’s Gift

    DTIC Science & Technology

    2004-05-01

…system. We propose a real-time algorithm to infer depth and build 3-dimensional coarse maps for objects through the analysis of cues provided by an… system is well defined at the boundary of these regions (although the derivatives are not). A time domain analysis is presented for a piece-linear…

  9. Real-time tricolor phase measuring profilometry based on CCD sensitivity calibration

    NASA Astrophysics Data System (ADS)

    Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng

    2017-02-01

A real-time tricolor phase measuring profilometry (RTPMP) based on charge coupled device (CCD) sensitivity calibration is proposed. Only one colour fringe pattern, whose red (R), green (G) and blue (B) components are respectively coded as three sinusoidal phase-shifting gratings with an equivalent shifting phase of 2π/3, is needed and sent to an appointed flash memory on a specialized digital light projector (SDLP). A specialized time-division multiplexing timing sequence actively controls the SDLP to project the fringe patterns in R, G and B channels sequentially onto the measured object in 1/72 of a second, and meanwhile actively controls a high frame rate monochrome CCD camera to capture the corresponding deformed patterns synchronously with the SDLP. In this way, the information sufficient for reconstructing the three-dimensional (3D) shape is obtained in 1/24 of a second. Due to the different spectral sensitivity of the CCD camera to RGB lights, the captured deformed patterns from the R, G and B channels cannot share the same peak and valley, which leads to lower accuracy or even failure to reconstruct the 3D shape. A deformed pattern amending method based on CCD sensitivity calibration is therefore developed to guarantee accurate 3D reconstruction. The experimental results verify the feasibility of the proposed RTPMP method, which can obtain the 3D shape at over the video frame rate of 24 frames per second, avoids colour crosstalk completely, and is effective for measuring objects changing in real time.
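With three fringe patterns shifted by 2π/3, the wrapped phase follows from the standard three-step phase-shifting formula. A sketch assuming shifts of −2π/3, 0, +2π/3 for the three channels (the standard convention, not necessarily the paper's exact channel ordering):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts of
    -2*pi/3, 0 and +2*pi/3 (the standard three-step formula):
    phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

The formula assumes all three patterns share the same modulation amplitude, which is exactly what unequal R, G, B sensor sensitivities break; this is why the abstract's CCD sensitivity calibration step matters.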

  10. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

Maps with 3D visual models are becoming a remarkable feature of 3D map services, and high-resolution image data is obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a realistic, image-based 3D map concept that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images; the geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive and creates an immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted.
This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.
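The geometric model of stereo images mentioned above yields metric measurements through the standard disparity-to-depth relation Z = fB/d for rectified image pairs. A minimal sketch:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo: depth Z = f * B / d, with the focal length f in
    pixels, the baseline B in metres and the disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With geo-referenced camera poses, such per-pixel depths are what turn a street-level image into a measurable 3D map rather than a photo.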

  11. Seeing the tipping point: Balance perception and visual shape.

    PubMed

    Firestone, Chaz; Keil, Frank C

    2016-07-01

    In a brief glance at an object or shape, we can appreciate a rich suite of its functional properties, including the organization of the object's parts, its optimal contact points for grasping, and its center of mass, or balancing point. However, in the real world and the laboratory, balance perception shows systematic biases whereby observers may misjudge a shape's center of mass by a severe margin. Are such biases simply quirks of physical reasoning? Or might they instead reflect more fundamental principles of object representation? Here we demonstrate systematically biased center-of-mass estimation for two-dimensional (2D) shapes (Study 1) and advance a surprising explanation of such biases. We suggest that the mind implicitly represents ordinary 2D shapes as rich, volumetric, three-dimensional (3D) objects, and that these "inflated" shape representations intrude on and bias perception of the 2D shape's geometric properties. Such "inflation" is a computer-graphics technique for segmenting shapes into parts, and we show that a model derived from this technique best accounts for the biases in center-of-mass estimation in Study 1. Further supporting this account, we show that reducing the need for inflated shape representations diminishes such biases: Center-of-mass estimation improved when cues to shapehood were attenuated (Study 2) and when shapes' depths were explicitly depicted using real-life objects laser-cut from wood (Study 3). We suggest that the technique of shape inflation is actually implemented in the mind; thus, biases in our impressions of balance reflect a more general functional characteristic of object perception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. The applicability of holography in forensic identification: a fusion of the traditional optical technique and digital technique.

    PubMed

    Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro

    2005-03-01

    In this study, the applicability of holography in the 3-dimensional recording of forensic objects such as skulls and mandibulae, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is visually observed on the other side of the holographic plate, and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, and cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using an image-processing software, the average deviations in the holographic images as compared to the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method of forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on the LCD monitor. On the other hand, in the video system, the holographic image captured by the CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images using the video system was smaller than that using the 2D-3D system. Holography seemed to perform comparably to the computer graphic system; however, a fusion with the digital technique would expand the utility of holography in superimposition.

  13. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

In this paper, we present a 3D virtual phantom design software package, developed using object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export it as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful tool for 3D phantom configuration and has passed real-scene application tests. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and research on X-ray imaging reconstruction algorithms.
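A voxel phantom of the kind such software builds can be represented as a labelled 3D array. A minimal sketch of one building block, a cylinder along the scanner axis, with hypothetical dimensions and a placeholder material value (not MPhantom's actual data model):

```python
import numpy as np

def cylinder_phantom(shape=(64, 64, 64), center=(32, 32), radius=16, value=1.0):
    """Voxelised cylinder along the z axis: a minimal building block for a
    virtual phantom (`value` standing in for a material label or density)."""
    vol = np.zeros(shape, dtype=np.float32)
    ys, xs = np.indices(shape[1:])
    mask = (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2
    vol[:, mask] = value    # same cross-section on every z slice
    return vol
```

Composing such primitives (cylinders, spheres, boxes) and then resampling slice by slice is the usual route from a constructive phantom description to exported CT-like images.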

  14. Applying Augmented Reality in practical classes for engineering students

    NASA Astrophysics Data System (ADS)

    Bazarov, S. E.; Kholodilin, I. Yu; Nesterov, A. S.; Sokhina, A. V.

    2017-10-01

In this article, an Augmented Reality application for teaching engineering students of electrical and technological specialties is introduced. To increase students' motivation for learning and their independence, new practical guidelines on Augmented Reality were developed for practical classes. During the application development, the authors used software such as Unity 3D and Vuforia. The Augmented Reality content consists of 3D models, images and animations, which are superimposed on real objects, helping students to study specific tasks. A user who has a smartphone, a tablet PC, or Augmented Reality glasses can visualize on-screen virtual objects added to a real environment. To analyze the current situation in higher education (the learners' interest in studying, their satisfaction with the educational process, and the impact of the Augmented Reality application on students), a questionnaire was developed and offered to students; the study involved 24 learners.

  15. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  16. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  17. Development of a real time multiple target, multi camera tracker for civil security applications

    NASA Astrophysics Data System (ADS)

    Åkerlund, Hans

    2009-09-01

A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  18. A Comparison of Cognitive Teaching Stimuli in a First Grade Classroom.

    ERIC Educational Resources Information Center

    Sigrest, Christine E.

    A study assessed the effectiveness of three cognitive levels of instruction with first graders--three-dimensional (3-D) instruction using real objects, two-dimensional (2-D) instruction using picture representations, and verbal instruction. The study population included 18 first-grade students between the ages of 6 and 8 at a small elementary city…

  19. Multilayered nonuniform sampling for three-dimensional scene representation

    NASA Astrophysics Data System (ADS)

    Lin, Huei-Yung; Xiao, Yu-Hua; Chen, Bo-Ren

    2015-09-01

    The representation of a three-dimensional (3-D) scene is essential in multiview imaging technologies. We present a unified geometry and texture representation based on global resampling of the scene. A layered data map representation with a distance-dependent nonuniform sampling strategy is proposed. It is capable of increasing the details of the 3-D structure locally and is compact in size. The 3-D point cloud obtained from the multilayered data map is used for view rendering. For any given viewpoint, image synthesis with different levels of detail is carried out using the quadtree-based nonuniformly sampled 3-D data points. Experimental results are presented using the 3-D models of reconstructed real objects.

  20. A near-real-time full-parallax holographic display for remote operations

    NASA Technical Reports Server (NTRS)

    Iavecchia, Helene P.; Huff, Lloyd; Marzwell, Neville I.

    1991-01-01

    A near real-time, full parallax holographic display system was developed that has the potential to provide a 3-D display for remote handling operations in hazardous environments. The major components of the system consist of a stack of three spatial light modulators which serves as the object source of the hologram; a near real-time holographic recording material (such as thermoplastic and photopolymer); and an optical system for relaying SLM images to the holographic recording material and to the observer for viewing.

  1. Efficient Skeletonization of Volumetric Objects.

    PubMed

    Zhou, Yong; Toga, Arthur W

    1999-07-01

Skeletonization promises to become a powerful tool for compact shape description, path planning, and other applications. However, current techniques can seldom efficiently process real, complicated 3D data sets, such as MRI and CT data of human organs. In this paper, we present an efficient voxel-coding based algorithm for skeletonization of 3D voxelized objects. The skeletons are interpreted as connected centerlines, consisting of sequences of medial points of consecutive clusters. These centerlines are initially extracted as paths of voxels, followed by medial point replacement, refinement, smoothing, and connection operations. Voxel-coding techniques are proposed for each of these operations in a uniform and systematic fashion. In addition to preserving basic connectivity and centeredness, the algorithm is characterized by straightforward computation, insensitivity to object boundary complexity, explicit extraction of ready-to-parameterize and branch-controlled skeletons, and efficient object hole detection. These issues are rarely discussed in traditional methods. A range of 3D medical MRI and CT data sets were used for testing the algorithm, demonstrating its utility.
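The voxel-coding idea, assigning each object voxel its BFS distance from a seed and then walking the codes downhill to extract a path of voxels, can be sketched as follows. This is a simplified illustration of the coding and path-extraction steps only, not the paper's full algorithm with medial point replacement and refinement:

```python
from collections import deque

NEIGHBOURS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))

def voxel_code(voxels, seed):
    """BFS voxel coding: each object voxel gets its 6-connected graph
    distance from the seed voxel."""
    codes = {seed: 0}
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in NEIGHBOURS:
            n = (x + dx, y + dy, z + dz)
            if n in voxels and n not in codes:
                codes[n] = codes[(x, y, z)] + 1
                queue.append(n)
    return codes

def centerline(voxels, seed):
    """Path from the farthest voxel back to the seed, stepping each time
    to the neighbour with the smallest code (always code - 1 under BFS)."""
    codes = voxel_code(voxels, seed)
    cur = max(codes, key=codes.get)       # farthest voxel starts the path
    path = [cur]
    while codes[cur] > 0:
        x, y, z = cur
        cur = min(((x + dx, y + dy, z + dz) for dx, dy, dz in NEIGHBOURS
                   if (x + dx, y + dy, z + dz) in codes),
                  key=codes.get)
        path.append(cur)
    return path
```

On a tubular object this path already approximates the centerline; the paper's subsequent medial-point and refinement operations are what re-centre it in thicker, more complex shapes.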

  2. Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.

    PubMed

    Lawson, Rebecca

    2014-02-01

    The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and with untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised-line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic recognition of familiar objects. Recognition by foot was slower (13 vs. 7 s) and much less accurate (47% vs. 9% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.

  3. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, virtual objects are projected into the real world, where researchers can interact with them. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images, using image processing techniques to generate the 3D mesh data that populates the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time; however, such rendered video frames lack 3D visual information, i.e. depth information. In this paper, we present a technique that utilizes a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real time into the virtual environment. 
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.

  4. The implementation of aerial object recognition algorithm based on contour descriptor in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Babayan, Pavel; Smirnov, Sergey; Strotov, Valery

    2017-10-01

    This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer contour descriptor and consists of two stages: learning and recognition. The learning stage explores the reference objects. Using the 3D models, we build a database of training images by rendering each model from viewpoints evenly distributed on a sphere; the points are distributed according to the geosphere principle. The gathered training image set is used to calculate the descriptors used in the recognition stage. The recognition stage estimates the similarity of the captured object and the reference objects by matching the observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
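
    The paper distributes rendering viewpoints by geosphere subdivision. A Fibonacci (golden-angle) sphere, sketched below, is a common stand-in that also yields a near-even distribution; it is an approximation for illustration, not the authors' exact scheme.

```python
import numpy as np

def fibonacci_sphere(n):
    """n approximately evenly spaced unit vectors, usable as camera
    viewpoints for rendering a 3D model from all sides."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0))      # golden angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n           # uniform spacing in z
    r = np.sqrt(1.0 - z * z)                # radius of the z-slice
    return np.stack([r * np.cos(phi * i), r * np.sin(phi * i), z], axis=1)
```

    Each returned vector would serve as a camera position (looking at the origin) when rendering the training images.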

  5. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
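
    The volumetric fusion step "akin to KinectFusion" boils down to a running weighted average of truncated signed distances per voxel. A minimal sketch of that update rule follows; the truncation distance and unit observation weight are illustrative assumptions, not MobileFusion's exact parameters.

```python
import numpy as np

def tsdf_update(tsdf, weight, depth_along_ray, voxel_depth, trunc=0.05):
    """Fuse one depth observation into a voxel's running TSDF average.

    depth_along_ray: observed surface depth on the ray through the voxel.
    voxel_depth:     the voxel's own depth along that ray.
    """
    sdf = depth_along_ray - voxel_depth       # signed distance to the surface
    d = np.clip(sdf / trunc, -1.0, 1.0)       # truncate to [-1, 1]
    new_weight = weight + 1.0                 # simple unit-weight scheme
    new_tsdf = (tsdf * weight + d) / new_weight
    return new_tsdf, new_weight
```

    Repeated calls pull each voxel's value toward the average of its observations; the zero crossing of the fused field is the extracted surface.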

  6. Virtual Reality Website of Indonesia National Monument and Its Environment

    NASA Astrophysics Data System (ADS)

    Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.

    2017-02-01

    The National Monument (Monumen Nasional) is Indonesia's national monument, located in Jakarta. It is a symbol of Jakarta and a source of pride for the people of Jakarta and Indonesia, and it houses a museum on the history of the country. To provide information to the general public, in this research we created and developed 3D graphics models of the National Monument and its surrounding environment. Virtual reality technology was used to visualize the National Monument and its surroundings in 3D. Current web technology makes it possible to display 3D objects in an internet browser: this research used Unity3D and WebGL to build virtual reality models that can be implemented and shown on a website. The result of this research is a 3D website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The virtual reality scene was divided into a number of sub-scenes so that it can be displayed with real-time visualization.

  7. A Proposed Framework for Collaborative Design in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Breland, Jason S.; Shiratuddin, Mohd Fairuz

    This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, (3) co-existence within the same virtual space, etc. This paper also discusses a proposed test to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and results from a pilot test.

  8. Precipitation Processes developed during ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999), Consistent 2D, semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique (i.e. is 2D or semi-3D CRM appropriate for the super-parameterization?); (2) calculate and examine the surface energy (especially radiation) and water budgets; (3) identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  9. 3D Model Visualization Enhancements in Real-Time Game Engines

    NASA Astrophysics Data System (ADS)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate, scientific representations. The main idea is to create simple geometries (with a low polygon count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to obtain models that work with normal or displacement maps. With the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms, and normals may then be calculated by render-to-texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted into a good-quality subdivision surface (with consistent tangency and curvature throughout). The subdivision surface is constructed using asset-construction methodologies borrowed from character animation; these techniques, recently implemented in many entertainment applications, are known as "retopology". The normal map is used as usual, to shade the surface of the model in a realistic way. The displacement map is used to refine, in real time, the flat faces of the object by adding the geometric detail missing from the low-poly model. The accuracy of the resulting geometry is progressively refined based on the distance from the viewpoint, so the result is like a continuous level of detail; the only difference is that there is no need to create different 3D models for one and the same object, since all geometric detail is calculated in real time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D content. 
With the release of Unity 4.0, new rendering features have been added, including DirectX 11 support. Real-time tessellation is a technique that can be applied by using such technology. Since the displacement and the resulting geometry are calculated by the GPU, the time-based execution cost of this technique is very low.
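
    The displacement-map idea, shown on the CPU and in miniature: each vertex is pushed along its normal by a height sampled from the map. The real pipeline does this per tessellated vertex on the GPU; nearest-neighbour sampling and the scale factor here are simplifications for illustration.

```python
import numpy as np

def displace(vertices, normals, uvs, heightmap, scale=0.1):
    """Offset each vertex along its (unit) normal by a height sampled
    from a displacement map at the vertex's UV coordinate."""
    h, w = heightmap.shape
    px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    heights = heightmap[py, px]                      # nearest-neighbour sample
    return vertices + normals * heights[:, None] * scale
```

    After GPU tessellation has subdivided the flat faces, exactly this offset is what restores the fine geometric detail of the scanned original.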

  10. 3D highway alignment optimization for Brookeville Bypass : final report.

    DOT National Transportation Integrated Search

    2005-06-01

    This study applies the previously developed Highway Alignment Optimization (HAO) model to the MD 97 Bypass project in Brookeville, Maryland. The objective of this study is to demonstrate the applicability of the HAO model to a real highway projec...

  11. Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data

    USGS Publications Warehouse

    Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.

    2003-01-01

    The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, as it allows a deeper penetration depth than ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies are applied in the range from 0 to 5 MHz. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used, with the aid of the singular value decomposition and/or the conjugate gradient method, to solve the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.
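
    A truncated-SVD solve of a linearized system is the standard stabilization alluded to above; a sketch follows. The truncation threshold is illustrative, and the paper's actual regularization of the Born-linearized equations may differ.

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-3):
    """Solve A x = b in the least-squares sense, discarding singular
    values below tol * s_max to suppress noise amplification."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]                       # drop ill-conditioned modes
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```

    For a well-conditioned system this reduces to the ordinary least-squares solution; as the contrast matrix becomes ill-conditioned, the truncation trades resolution for stability.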

  12. Three-dimensional display technologies

    PubMed Central

    Geng, Jason

    2014-01-01

    The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain’s power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have increasingly demanded true 3D display with no eyeglasses (autostereoscopic). Therefore, it would be very beneficial to readers of this journal to have a systematic review of state-of-the-art 3D display technologies. PMID:25530827

  13. Railway clearance intrusion detection method with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    During railway construction and operation, objects intruding into the railway clearance gauge greatly threaten the safety of railway operation, so real-time intrusion detection is of great importance. To address the shortcomings of single-image methods, namely depth insensitivity and shadow interference, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. To improve 3D reconstruction speed, a suspicious region is first determined by a background-difference method on a single camera's image sequence; image rectification, stereo matching, and 3D reconstruction are executed only when a suspicious region exists. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS, where they are used to calculate object position and intrusion. Experiments in a railway scene show that the position precision is better than 10 mm. The method is an effective way to detect clearance intrusion and can satisfy the requirements of railway applications.
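
    The CCS-to-TCS step amounts to applying a rigid transform to the reconstructed point cloud and testing the result against the clearance gauge. A minimal sketch, with an illustrative box-shaped gauge; the real gauge profile and the paper's computed transform are more involved.

```python
import numpy as np

def ccs_to_tcs(points_ccs, R, t):
    """Map 3D points from the Camera Coordinate System into the Track
    Coordinate System with a rigid transform (R, t), then flag points
    inside an illustrative box-shaped clearance gauge."""
    points_tcs = points_ccs @ R.T + t
    # hypothetical gauge: within 2.0 m laterally, below 5.0 m above rail
    intruding = (np.abs(points_tcs[:, 0]) < 2.0) & \
                (points_tcs[:, 2] > 0.0) & (points_tcs[:, 2] < 5.0)
    return points_tcs, intruding
```

    Any point flagged True would trigger the intrusion alarm; the actual system evaluates the standardized clearance cross-section at each position along the track.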

  14. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come into use in many fields, such as education, medical services, entertainment, art, and digital archiving, thanks to advances in computational power, and the demand for photorealistic virtual models continues to grow. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner; as a result, we can create a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. A polarization filter is used to separate the reflection components. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used to synthesize object images with realistic shading effects under arbitrary illumination conditions.
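
    The polarization-based separation rests on the fact that, for many surfaces, specular reflection is (nearly) fully polarized while diffuse reflection is not: rotating a linear polarizer, the per-pixel minimum then contains only half the diffuse light. A sketch under that idealized assumption; the paper's robust parameter estimation is more elaborate.

```python
import numpy as np

def separate_reflection(images):
    """Split a stack of images, taken through a rotating linear
    polarizer, into diffuse and specular components.  Assumes fully
    polarized specular and unpolarized diffuse reflection."""
    i_min = images.min(axis=0)       # specular extinguished here
    i_max = images.max(axis=0)       # specular fully transmitted here
    diffuse = 2.0 * i_min            # unpolarized light is halved by the filter
    specular = i_max - i_min         # the polarized remainder
    return diffuse, specular
```

    With the components separated, Lambertian parameters can be fit to the diffuse image and highlight-model parameters to the specular image, as the abstract describes.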

  15. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their potential for fast and accurate model generation. For cultural heritage preservation, for the representation of objects that are important for tourism, and for their interactive visualization, 3D models are highly effective and intuitive for present-day users, who have stringent requirements and high expectations. Depending on the complexity of the objects in a specific case, various technological methods can be applied. The objects selected in this particular research are located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding the principles and technological processes needed for 3D modelling and visualization are presented, along with recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information, such as interactive maps, satellite imagery, sound, video and specific information about the objects, are described. 
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  16. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array-processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL and runs on Windows, Linux, and Macintosh.
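
    The computational core of a typical first VPython exercise looks like the sketch below: a ball bouncing under gravity, updated with semi-implicit Euler steps. The VPython-specific rendering calls are indicated in comments so the sketch runs without the Visual module installed.

```python
# In VPython, the ball would be created with something like
#   ball = sphere(pos=vector(0, 2, 0), radius=0.1)
# and the loop throttled with rate(100); the Visual thread then
# redraws the scene automatically as ball.pos changes.
def bounce(y=2.0, vy=0.0, g=-9.8, dt=0.01, steps=1000):
    for _ in range(steps):
        vy += g * dt        # update velocity from acceleration
        y += vy * dt        # update position from velocity
        if y < 0:
            vy = abs(vy)    # elastic bounce off the floor at y = 0
    return y
```

    This is exactly the "purely computational program" the abstract describes: the student writes only the physics, and Visual supplies the real-time 3D display.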

  17. A Study on Markerless AR-Based Infant Education System Using CBIR

    NASA Astrophysics Data System (ADS)

    Lim, Ji-Hoon; Kim, Seoksoo

    Block play is widely known to be effective in helping children develop emotionally and physically through learning by sight and touch, but it cannot provide learning through hearing. In this study, this limitation is overcome by a method that recognizes an object made of blocks (rather than the marker-based method generally used for AR environments), a matching technology that enables an object to be recognized from every direction, and a technology that combines images of the real world with 2D/3D images, pictures, and sounds of a similar object. An education system for children aged 3-5 is designed to implement markerless AR with the CBIR method.

  18. SAVA 3: A testbed for integration and control of visual processes

    NASA Technical Reports Server (NTRS)

    Crowley, James L.; Christensen, Henrik

    1994-01-01

    The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.

  19. Accommodation measurements of horizontally scanning holographic display.

    PubMed

    Takaki, Yasuhiro; Yokouchi, Masahito

    2012-02-13

    Eye accommodation is considered to function properly for three-dimensional (3D) images generated by holography. We developed a horizontally scanning holographic display technique that enlarges both the screen size and viewing zone angle. A 3D image generated by this technique can be easily seen by both eyes. In this study, we measured the accommodation responses to a 3D image generated by the horizontally scanning holographic display technique that has a horizontal viewing zone angle of 14.6° and screen size of 4.3 in. We found that the accommodation responses to a 3D image displayed within 400 mm from the display screen were similar to those of a real object.

  20. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675

  1. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
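
    Recovering a subject's 3D coordinates from two sensor cameras is classically done by linear (DLT) triangulation. A sketch, assuming calibrated 3x4 projection matrices; the broadcast system's color-detection front end that supplies the pixel coordinates is not shown.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one 3D point from its pixel coordinates
    (x1, x2) in two cameras with 3x4 projection matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],     # each observation contributes
        x1[1] * P1[2] - P1[1],     # two linear constraints on the
        x2[0] * P2[2] - P2[0],     # homogeneous point X
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector of A
    return X[:3] / X[3]            # de-homogenize
```

    In the tracking loop, the triangulated position would drive the robot camera's pan, tilt, and zoom.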

  2. Multispectral high-resolution hologram generation using orthographic projection images

    NASA Astrophysics Data System (ADS)

    Muniraj, I.; Guo, C.; Sheridan, J. T.

    2016-08-01

    We present a new method of synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and their OPIs are obtained. The reference beam is then multiplied with the corresponding OPI and integrated to form a Fourier hologram. Eventually, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally, and the results support the feasibility of the proposed method.
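
    The GS half of the GS/HIO phase retrieval mentioned above alternates between the source and Fourier planes, imposing the known magnitude in each. A minimal Gerchberg-Saxton sketch for a unit-amplitude source plane; the paper's modified algorithm and the HIO step are not reproduced here.

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    """Classic GS iteration: find a source-plane phase whose Fourier
    transform magnitude approximates target_amp."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))   # impose Fourier magnitude
        field = np.fft.ifft2(F)
        field = np.exp(1j * np.angle(field))        # impose unit source amplitude
    return np.angle(field)
```

    The returned phase map is the quantity a phase-only hologram (or SLM) would display to reconstruct the target intensity.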

  3. Augmented reality on poster presentations, in the field and in the classroom

    NASA Astrophysics Data System (ADS)

    Hawemann, Friedrich; Kolawole, Folarin

    2017-04-01

    Augmented reality (AR) is the direct addition of virtual information through an interface to a real-world environment. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example, an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media, such as posters and handouts, lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or in the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, to hands-on material and in the field. In the latter example, location-based data is loaded and, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation and for classroom and field geoscience learning.

  4. Real-time dose calculation and visualization for the proton therapy of ocular tumours

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Karsten; Bendl, Rolf

    2001-03-01

    A new real-time dose calculation and visualization was developed as part of the new 3D treatment planning tool OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner Institut Berlin. The implementation resolves the common separation between parameter definition, dose calculation and evaluation and allows a direct examination of the expected dose distribution while adjusting the treatment parameters. The new tool allows the therapist to move the desired dose distribution under visual control in 3D to the appropriate place. The visualization of the resulting dose distribution as a 3D surface model, on any 2D slice or on the surface of specified ocular structures is done automatically when adapting parameters during the planning process. In addition, approximate dose volume histograms may be calculated with little extra time. The dose distribution is calculated and visualized in 200 ms with an accuracy of 6% for the 3D isodose surfaces and 8% for other objects. This paper discusses the advantages and limitations of this new approach.
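
    The "approximate dose volume histograms" mentioned above are cheap to compute once a dose grid and a structure mask exist; a sketch follows, with an illustrative uniform binning scheme (not the OCTOPUS implementation).

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative dose-volume histogram for one structure: the fraction
    of the structure's voxels receiving at least each dose level."""
    d = dose[mask]                               # doses inside the structure
    levels = np.linspace(0.0, d.max(), bins)
    volume_frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_frac
```

    Plotting volume_frac against levels gives the familiar monotonically decreasing DVH curve used to evaluate a plan while the treatment parameters are adjusted.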

  5. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography (DR) in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago, and the technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies were also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  6. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography (DR) in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago, and the technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies were also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  7. Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration

    NASA Astrophysics Data System (ADS)

    Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.

    2013-07-01

    This paper presents a 3D platform that supports both cultural heritage site survey and virtual exploration. It provides a single, easy-to-use framework for merging multi-scale 3D measurements based on photogrammetry, documentation produced by experts and the knowledge of the domains involved, leaving the experts able to extract and choose the relevant information to produce the final survey. Taking into account the interpretation of the real world is in fact the main goal of an archaeological survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not by themselves solve the problem of surveys: new opportunities for 3D representation are now available, and we must use them and find new ways to link geometry and knowledge. The platform is able to efficiently manage and process large 3D data sets (point sets, meshes) thanks to the implementation of state-of-the-art space partitioning methods such as octrees and kd-trees, and can thus interact with dense point clouds (thousands to millions of points) in real time. The semantisation of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense point clouds and geometric primitive optimization. The platform provides an interface that enables experts to describe geometric representations of objects of interest, such as ashlar blocks, stratigraphic units or generic items (contours, lines, … ), directly on the 3D representation of the site and without explicit links to the underlying algorithms. The platform provides two ways of describing a geometric representation. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection onto the underlying mesh or point cloud. If photographs are not available, or if the expert wants to use only the 3D representation, the object's shape can simply be drawn on it. 
When 3D representations of objects of a surveyed site are extracted from the mesh, the link with domain-related documentation is made by means of a set of forms designed by experts. Information from these forms is linked with the geometry so that documentation can be attached to the viewed objects. Additional semantisation methods related to specific domains have been added to the platform. Beyond realistic rendering of the surveyed site, the platform embeds non-photorealistic rendering (NPR) algorithms, which make it possible to dynamically illustrate objects of interest related to knowledge with specific styles. The whole platform is implemented in a Java framework and relies on a modern, effective 3D engine that makes the latest rendering methods available. We illustrate this work on various photogrammetric surveys, in medieval archaeology with the Shawbak castle in Jordan and in underwater archaeology on different marine sites.
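
    The real-time interaction with dense point clouds described above rests on space-partitioning structures such as kd-trees. As an illustration only (this is not the platform's Java implementation), a minimal 3D kd-tree with nearest-neighbour lookup can be sketched in Python with numpy:

```python
import numpy as np

class KDNode:
    __slots__ = ("point", "axis", "left", "right")
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kdtree(points, depth=0):
    """Recursively split the point set along alternating axes at the median."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    """Nearest-neighbour search, pruning subtrees beyond the current best."""
    if node is None:
        return best
    if best is None or (np.linalg.norm(node.point - query)
                        < np.linalg.norm(best - query)):
        best = node.point
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    # Descend the far side only if the splitting plane is closer than the best hit
    if abs(diff) < np.linalg.norm(best - query):
        best = nearest(far, query, best)
    return best
```

    The pruning step is what makes queries scale roughly logarithmically on well-distributed clouds, which is the property that enables real-time picking in millions of points.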

  8. 3D thermography in non-destructive testing of composite structures

    NASA Astrophysics Data System (ADS)

    Hellstein, Piotr; Szwedo, Mariusz

    2016-12-01

    The combination of 3D scanners and infrared cameras has led to the introduction of 3D thermography. Such analysis produces results in the form of three-dimensional thermograms, in which temperatures are mapped onto a 3D model reconstruction of the inspected object. To date, work in the field of 3D thermography has focused on its utility in passive thermography inspections. The authors propose a new real-time 3D temperature mapping method which, for the first time, can be applied to active thermography analyses. All steps required to apply 3D thermography are discussed, starting from the acquisition of three-dimensional and infrared data, going through image processing and scene reconstruction, and finishing with thermal projection and ray-tracing visualisation techniques. The application of the developed method was tested during the diagnosis of several industrial composite structures: boats, planes and wind turbine blades.

  9. Picture object recognition in an American black bear (Ursus americanus).

    PubMed

    Johnson-Ulrich, Zoe; Vonk, Jennifer; Humbyrd, Mary; Crowley, Marilyn; Wojtkowski, Ela; Yates, Florence; Allard, Stephanie

    2016-11-01

    Many animals have been tested for conceptual discriminations using two-dimensional images as stimuli, and many of these species appear to transfer knowledge from 2D images to analogous real-life objects. We tested an American black bear for picture-object recognition using a two-alternative forced-choice task. She was presented with four unique sets of objects and corresponding pictures. The bear showed generalization both from objects to pictures and from pictures to objects; however, her transfer was superior when moving from real objects to pictures, suggesting that bears can recognize visual features from real objects within photographic images during discriminations.

  10. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Entities in cities and urban areas, such as building structures, are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient for coping with the complexity of new developments in big cities. The emergence of 3D city models has boosted the efficiency of analysing and managing urban areas, as 3D data have been proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on them. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools on the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. 
CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models at five well-defined Levels of Detail (LoD), namely LoD0 to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) plus building or roof print) while LoD4 is the most complex (architectural details with interior structures). Semantic information is one of the main components of CityGML and 3D city models, and provides important input for any analysis. However, more often than not, semantic information is not available for a 3D city model because of unstandardized modelling processes. One example is a building generated as a single object, without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc. This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, making it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that performs hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes in the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models and visualize the results in a user-friendly graphical user interface (GUI).

  11. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  12. Improved docking alignment system

    NASA Technical Reports Server (NTRS)

    Monford, Leo G. (Inventor)

    1988-01-01

    Improved techniques are provided for the alignment of two objects. The present invention is particularly suited for 3-D translation and 3-D rotational alignment of objects in outer space. A camera is affixed to one object, such as a remote manipulator arm of the spacecraft, while the planar reflective surface is affixed to the other object, such as a grapple fixture. A monitor displays in real-time images from the camera such that the monitor displays both the reflected image of the camera and visible marking on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.

  13. 3-D Object Pose Determination Using Complex EGI

    DTIC Science & Technology

    1990-10-01

    the length of edges of the polyhedron from the EGI. Dane and Bajcsy [4] make use of the Gaussian Image to spatially segment a group of range points...involving real range data of two smooth objects were conducted. The two smooth objects are the torus and ellipsoid, whose databases have been created...in the simulations earlier. 5.0.1 Implementational Issues The torus and ellipsoid were crafted out of clay to resemble the models whose databases were

  14. 3D Laser Scanner for Underwater Manipulation.

    PubMed

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  15. Improving multiple sclerosis management and collecting safety information in the real world: the MSDS3D software approach.

    PubMed

    Haase, Rocco; Wunderlich, Maria; Dillenseger, Anja; Kern, Raimar; Akgün, Katja; Ziemssen, Tjalf

    2018-04-01

    For safety evaluation, randomized controlled trials (RCTs) are not fully able to identify rare adverse events. The richest source of safety data lies in the post-marketing phase. Real-world evidence (RWE) and observational studies are becoming increasingly popular because they reflect the usefulness of drugs in real life and have the ability to discover uncommon or rare adverse drug reactions. Areas covered: When the documentation of psychological symptoms and other medical disciplines is added, the necessity for complex documentation becomes apparent. The collection of high-quality data sets in clinical practice requires the use of special documentation software, as the quality of data in RWE studies can be an issue in contrast to the data obtained from RCTs. The MSDS3D software combines the documentation of patient data with the management of patients with multiple sclerosis. Following continuous development over several treatment-specific modules, we improved and expanded the safety management in MSDS3D with regard to the characteristics of different treatments and populations. Expert opinion: eHealth-enhanced post-authorisation safety studies may complete the fundamental quest of RWE for individually improved treatment decisions and balanced therapeutic risk assessment. MSDS3D is carefully designed to contribute to every single objective in this process.

  16. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military applications and so on. However, most technologies provide 3D display in front of screens that are parallel to the walls, and the sense of immersion is decreased. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset towards the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the position of the viewer's eyes in the real world. When the observer stands inside the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near clip plane parameter is the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second. To validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
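
    The offset perspective projection used for such virtual cameras corresponds to an asymmetric viewing frustum. As a sketch of the underlying matrix (Python with numpy assumed; OpenGL glFrustum convention, with l, r, b, t the frustum edges on the near plane, and not the authors' actual camera code):

```python
import numpy as np

def off_axis_projection(l, r, b, t, n, f):
    """Asymmetric-frustum projection matrix (OpenGL glFrustum convention).

    Shifting [l, r] or [b, t] off-centre yields the offset perspective
    projection needed when the screen is not centred on the camera axis.
    """
    return np.array([
        [2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

    Points on the near clip plane map to normalized depth -1 and points on the far plane to +1, which is why the near clip plane setting is the critical parameter for this kind of camera.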

  17. a Proposal for Generalization of 3d Models

    NASA Astrophysics Data System (ADS)

    Uyar, A.; Ulugtekin, N. N.

    2017-11-01

    In recent years, 3D models have been created of many cities around the world. Most of these 3D city models have been introduced as purely graphic or geometric models, and their semantic and topographic aspects have been neglected. In order to use 3D city models beyond their original task, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept in 3D modelling, can be defined as the degree of abstraction at which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and an outlook on open problems and future work.

  18. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  19. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  20. Target-locking acquisition with real-time confocal (TARC) microscopy.

    PubMed

    Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A

    2007-07-09

    We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Real-time Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely-diffusing clusters of attractive colloidal particles, and actively-transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and the live cells move distances several times the length of the imaging volume.
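
    The acquire-analyse-move loop described above can be sketched in a few lines. Here the structural analysis is reduced to an intensity-weighted centroid and the stage move to a simple re-centring offset (Python with numpy assumed; function names are hypothetical, not the authors' code):

```python
import numpy as np

def locate_feature(stack, threshold):
    """Intensity-weighted centroid of all voxels above a threshold."""
    coords = np.argwhere(stack > threshold)
    weights = stack[stack > threshold]
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

def target_lock_step(stack, stage_position, threshold=0.5):
    """One iteration of the loop: the offset of the located feature from the
    stack centre becomes the stage correction for the next acquisition."""
    centre = (np.array(stack.shape) - 1) / 2.0
    offset = locate_feature(stack, threshold) - centre
    return stage_position + offset
```

    Repeating this step after every 3D stack keeps a drifting feature centred in the imaging volume, provided the feature moves less than half the stack size between acquisitions.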

  1. 3D Data Acquisition Based on OpenCV for Close-Range Photogrammetry Applications

    NASA Astrophysics Data System (ADS)

    Jurjević, L.; Gašparović, M.

    2017-05-01

    Developments in cameras, computers and algorithms for the 3D reconstruction of objects from images have increased the popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, and usage of this and similar algorithms certainly has a lot of potential in real-time applications. That is why this research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other fields.
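
    The spatial-difference check between reference and produced point clouds can be sketched as a nearest-neighbour root-mean-square error (Python with numpy assumed; a brute-force illustrative helper, not the paper's implementation, and adequate only for small clouds):

```python
import numpy as np

def cloud_to_cloud_rmse(produced, reference):
    """RMS of each produced point's distance to its nearest reference point.

    produced, reference: (N, 3) and (M, 3) arrays of point coordinates.
    Brute-force pairwise distances; for large clouds a kd-tree would be used.
    """
    d2 = ((produced[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)          # squared distance to closest reference point
    return float(np.sqrt(nearest.mean()))
```

    Note that this measure is asymmetric (produced-to-reference); evaluations often also report the reverse direction or the maximum of both.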

  2. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively, and for most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion times in AR than in VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. The mouse and the 3D input device produced similar task completion times within each condition (AR or VR). We further found no differences in reported comfort.

  3. a Method for the Registration of Hemispherical Photographs and Tls Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
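
    Assigning an RGB value to each 3D point, as described above, amounts to projecting the point into the oriented image with a pinhole camera model and sampling the pixel it lands on. A minimal sketch (Python with numpy assumed; nearest-neighbour sampling, no occlusion handling, and not the authors' implementation):

```python
import numpy as np

def colourise_points(points, image, K, R, t):
    """Attach an RGB value to each 3D point via pinhole projection.

    points: (N, 3) world coordinates; K: 3x3 camera intrinsics;
    R, t: world-to-camera rotation and translation.
    Points outside the image or behind the camera keep colour (0, 0, 0).
    """
    cam = points @ R.T + t                      # world -> camera frame
    uvw = cam @ K.T                             # homogeneous pixel coordinates
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    colours = np.zeros((len(points), 3), dtype=image.dtype)
    ok = ((cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w)
          & (uv[:, 1] >= 0) & (uv[:, 1] < h))
    colours[ok] = image[uv[ok, 1], uv[ok, 0]]   # image row = v, column = u
    return colours
```

    In practice the hemispherical camera used here requires a fisheye rather than a plain pinhole model, and occlusion tests are needed so that hidden points are not coloured by foreground surfaces; the projection-and-sample structure stays the same.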

  4. 3D printing and IoT for personalized everyday objects in nursing and healthcare

    NASA Astrophysics Data System (ADS)

    Asano, Yoshihiro; Tanaka, Hiroya; Miyagawa, Shoko; Yoshioka, Junki

    2017-04-01

    Today, the application of 3D printing technology for medical use is becoming popular. It strongly helps to make complicated shapes of body parts with functional materials, so that injured, weakened or missing parts can be complemented and their original shape and function recovered. However, these cases mainly focus on the symptom itself, not on the everyday lives of patients. With life spans extending, many of us will live with chronic disease for a long time, so we should think about our living environment more carefully. For example, we can make personalized everyday objects that support both body and mind. We therefore use 3D printing to make everyday objects from a nursing/healthcare perspective. In this project, we have two main research questions. The first is how to make objects that patients really require. We invited many kinds of people, such as engineers, nurses and patients, to our research activity: nurses identify patients' real demands first, and engineers support them with rapid prototyping. Through this we found the best collaboration methodologies among nurses, engineers and patients. The second question is how to trace and evaluate the usage of the created objects. It is clearly difficult to monitor a user's activity for a long time, so we are developing an IoT sensing system that monitors activities remotely. We enclose a data logger, which lasts about one month, within the 3D printed objects. After one month, we can collect the data from the objects and understand how they have been used.

  5. High-purity 3D nano-objects grown by focused-electron-beam induced deposition.

    PubMed

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M; Koopmans, Bert

    2016-09-02

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures but also offer a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  6. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ˜50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures but also offer a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  7. (abstract) A High Throughput 3-D Inner Product Processor

    NASA Technical Reports Server (NTRS)

    Daud, Tuan

    1996-01-01

    A particularly challenging image-processing application is real-time scene acquisition and object discrimination. It requires spatio-temporal recognition of point and resolved objects at high speeds with parallel processing algorithms. Neural network paradigms provide fine-grain parallelism and, when implemented in hardware, offer orders-of-magnitude speedup. However, neural networks implemented on a VLSI chip are planar architectures capable of efficient processing of linear vector signals rather than 2-D images. Therefore, for processing of images, a 3-D stack of neural-net ICs receiving planar inputs and consuming minimal power is required. Details of the circuits and chip architectures will be described, along with the need to develop ultralow-power electronics. Further, use of the architecture in a system for high-speed processing will be illustrated.

  8. Interferometric and nonlinear-optical spectral-imaging techniques for outer space and live cells

    NASA Astrophysics Data System (ADS)

    Itoh, Kazuyoshi

    2015-12-01

    Multidimensional signals such as spectral images allow us to have deeper insights into the nature of objects. In this paper the spectral imaging techniques that are based on optical interferometry and nonlinear optics are presented. The interferometric imaging technique is based on the unified theory of the Van Cittert-Zernike and Wiener-Khintchine theorems and allows us to retrieve a spectral image of an object in the far zone from the 3D spatial coherence function. The retrieval principle is explained using a very simple object. The promising applications to space interferometers for astronomy that are currently in progress will also be briefly touched on. An interesting extension of interferometric spectral imaging is a 3D and spectral imaging technique that records 4D information of objects, where the 3D and spectral information is retrieved from the cross-spectral density function of the optical field. The 3D imaging is realized via the numerical inverse propagation of the cross-spectral density. A few techniques suggested recently are introduced. The nonlinear optical technique that utilizes stimulated Raman scattering (SRS) for spectral imaging of biomedical targets is presented last. The strong signals of SRS permit us to get vibrational information of molecules in the live cell or tissue in real time. The vibrational information of unstained or unlabeled molecules is crucial especially for medical applications. The 3D information due to the optical nonlinearity is also an attractive feature of SRS spectral microscopy.

  9. Holographic three-dimensional telepresence using large-area photorefractive polymer.

    PubMed

    Blanche, P-A; Bablumian, A; Voorakaranam, R; Christenson, C; Lin, W; Gu, T; Flores, D; Wang, P; Hsieh, W-Y; Kathaperumal, M; Rachwal, B; Siddiqui, O; Thomas, J; Norwood, R A; Yamamoto, M; Peyghambarian, N

    2010-11-04

    Holography is a technique that is used to display objects or scenes in three dimensions. Such three-dimensional (3D) images, or holograms, can be seen with the unassisted eye and are very similar to how humans see the actual environment surrounding them. The concept of 3D telepresence, a real-time dynamic hologram depicting a scene occurring in a different location, has attracted considerable public interest since it was depicted in the original Star Wars film in 1977. However, the lack of sufficient computational power to produce realistic computer-generated holograms and the absence of large-area and dynamically updatable holographic recording media have prevented realization of the concept. Here we use a holographic stereographic technique and a photorefractive polymer material as the recording medium to demonstrate a holographic display that can refresh images every two seconds. A 50 Hz nanosecond pulsed laser is used to write the holographic pixels. Multicoloured holographic 3D images are produced by using angular multiplexing, and the full parallax display employs spatial multiplexing. 3D telepresence is demonstrated by taking multiple images from one location and transmitting the information via Ethernet to another location where the hologram is printed with the quasi-real-time dynamic 3D display. Further improvements could bring applications in telemedicine, prototyping, advertising, updatable 3D maps and entertainment.

  10. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
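    The least-mean-squares motion estimate described above can be illustrated with the standard closed-form SVD (Kabsch) solver for rigid motion between two corresponding 3D point sets, such as stereo-derived positions of stationary-background pixels in successive frames. This is a hedged sketch of that class of estimator, not the system's actual code; the function name `rigid_motion_lsq` and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def rigid_motion_lsq(P, Q):
    """Least-squares rigid motion (R, t) such that Q ~= P @ R.T + t.

    P, Q: (N, 3) arrays of corresponding 3D points, e.g. stereo-derived
    positions of stationary-background pixels in two successive frames.
    Closed-form SVD (Kabsch) solution to the least-squares problem.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation
    t = cQ - R @ cP                              # optimal translation
    return R, t
```

    As in the abstract, such a least-squares fit is dominated by the stationary background whenever moving-object pixels are in the minority, which is what makes it suitable for real-time visual odometry.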

  11. Application of velocity filtering to optical-flow passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
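    The space-domain shift-and-add idea mentioned above can be sketched as follows: for a hypothesized image velocity, each frame is shifted back by its accumulated displacement and the stack is averaged, so that points moving at the matched velocity add coherently while everything else smears out. This is a minimal NumPy illustration of the general principle, not the paper's implementation; integer pixel velocities are assumed for simplicity.

```python
import numpy as np

def shift_and_add(frames, vx, vy):
    """Space-domain velocity filter (matched filter for constant velocity).

    Frame k is shifted back by k*(vx, vy) pixels and the stack is
    averaged: features moving at exactly (vx, vy) pixels/frame add up
    coherently, while everything else is smeared out.
    frames: (K, H, W) array; integer pixel velocities for simplicity.
    """
    K = frames.shape[0]
    acc = np.zeros(frames.shape[1:])
    for k in range(K):
        # np.roll wraps around; adequate for this illustrative sketch
        acc += np.roll(frames[k], shift=(-k * vy, -k * vx), axis=(0, 1))
    return acc / K
```

    A point target moving at the matched velocity keeps its full amplitude in the filter output, while a mismatched velocity hypothesis spreads the same energy over K pixels; sweeping the hypothesized velocity therefore yields depth, since image velocity is tied to sensor/object distance in the optical-flow setting.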

  12. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    PubMed

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups in 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without real-time 3D IGRT. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The geometric accuracy results with real-time 3D IGRT had a mean error of <0.5 mm and a standard deviation of <1.1 mm. Numerous additional articles exist that describe real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. Many more approaches that could be implemented were identified. 
These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Human observers are biased in judging the angular approach of a projectile.

    PubMed

    Welchman, Andrew E; Tuck, Val L; Harris, Julie M

    2004-01-01

    How do we decide whether an object approaching us will hit us? The optic array provides information sufficient for us to determine the approaching trajectory of a projectile. However, when using binocular information, observers report that trajectories near the mid-sagittal plane are wider than they actually are. Here we extend this work to consider stimuli containing additional depth cues. We measure observers' estimates of trajectory direction first for computer rendered, stereoscopically presented, rich-cue objects, and then for real objects moving in the world. We find that, under both rich cue conditions and with real moving objects, observers show positive bias, overestimating the angle of approach when movement is near the mid-sagittal plane. The findings question whether the visual system, using both binocular and monocular cues to depth, can make explicit estimates of the 3-D location and movement of objects in depth.

  14. Differences in the effects of crowding on size perception and grip scaling in densely cluttered 3-D scenes.

    PubMed

    Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan

    2015-01-01

    Objects rarely appear in isolation in natural scenes. Although many studies have investigated how nearby objects influence perception in cluttered scenes (i.e., crowding), none has studied how nearby objects influence visually guided action. In Experiment 1, we found that participants could scale their grasp to the size of a crowded target even when they could not perceive its size, demonstrating for the first time that neurologically intact participants can use visual information that is not available to conscious report to scale their grasp to real objects in real scenes. In Experiments 2 and 3, we found that changing the eccentricity of the display and the orientation of the flankers had no effect on grasping but strongly affected perception. The differential effects of eccentricity and flanker orientation on perception and grasping show that the known differences in retinotopy between the ventral and dorsal streams are reflected in the way in which people deal with targets in cluttered scenes. © The Author(s) 2014.

  15. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  16. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. We therefore have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on frequency-domain template matching of the features on the epipolar lines. During a training period our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This is an important advantage of the methodology, because no "real-world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g., passport photos), but our system is trainable on any other kind of object.

  17. Direct Measurement of Proximal Isovelocity Surface Area by Real-Time Three-Dimensional Color Doppler for Quantitation of Aortic Regurgitant Volume: An In Vitro Validation

    PubMed Central

    Pirat, Bahar; Little, Stephen H.; Igo, Stephen R.; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J.; Zoghbi, William A.

    2012-01-01

    Objective The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. Methods We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time velocity integral of AR/peak AR velocity. Results Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Conclusion Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than the conventional 2D method with the hemispheric PISA assumption. PMID:19168322
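    The volume computation follows directly from the formulas stated in the abstract: 2D PISA = 2πr² under the hemispheric assumption, and regurgitant volume = PISA × aliasing velocity × time-velocity integral of AR / peak AR velocity. A small sketch with hypothetical input values (not taken from the study; units in cm and cm/s):

```python
import math

def pisa_2d(radius_cm):
    """Hemispheric 2D PISA from the measured radius: 2*pi*r^2 (cm^2)."""
    return 2.0 * math.pi * radius_cm ** 2

def regurgitant_volume(pisa_cm2, aliasing_vel_cm_s, ar_vti_cm, ar_peak_vel_cm_s):
    """Regurgitant volume per beat (mL), per the abstract's formula:
    PISA x aliasing velocity x time-velocity integral of AR / peak AR velocity.
    PISA x aliasing velocity gives the instantaneous flow rate (mL/s);
    the VTI / peak-velocity ratio converts it to a per-beat volume.
    """
    return pisa_cm2 * aliasing_vel_cm_s * ar_vti_cm / ar_peak_vel_cm_s

# Hypothetical example: r = 0.5 cm, aliasing velocity 40 cm/s,
# AR VTI 150 cm, peak AR velocity 450 cm/s.
vol = regurgitant_volume(pisa_2d(0.5), 40.0, 150.0, 450.0)
```

    With these illustrative inputs the estimate falls near 21 mL/beat, which sits inside the 12.6-30.6 mL/beat range the study reports for the flow-meter reference; the 3D variant replaces `pisa_2d` with the PISA measured from the radial planes.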

  18. Use of 3D techniques for virtual production

    NASA Astrophysics Data System (ADS)

    Grau, Oliver; Price, Marc C.; Thomas, Graham A.

    2000-12-01

    Virtual production for broadcast is currently mainly used in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material that make use of 3D technology. The applications range from the analysis of sport scenes and 3DTV up to the creation of fully immersive content. In a virtual studio a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques. The isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is 2D so far. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements of shape accuracy, and the kind of representation, differ in accordance with the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R&D department. An enhanced Virtual Studio for 3D programs is proposed that covers a range of applications for virtual production.

  19. Spatially encoded phase-contrast MRI-3D MRI movies of 1D and 2D structures at millisecond resolution.

    PubMed

    Merboldt, Klaus-Dietmar; Uecker, Martin; Voit, Dirk; Frahm, Jens

    2011-10-01

    This work demonstrates that the principles underlying phase-contrast MRI may be used to encode spatial rather than flow information along a perpendicular dimension, if this dimension contains an MRI-visible object at only one spatial location. In particular, the situation applies to 3D mapping of curved 2D structures which requires only two projection images with different spatial phase-encoding gradients. These phase-contrast gradients define the field of view and mean spin-density positions of the object in the perpendicular dimension by respective phase differences. When combined with highly undersampled radial fast low angle shot (FLASH) and image reconstruction by regularized nonlinear inversion, spatial phase-contrast MRI allows for dynamic 3D mapping of 2D structures in real time. First examples include 3D MRI movies of the acting human hand at a temporal resolution of 50 ms. With an even simpler technique, 3D maps of curved 1D structures may be obtained from only three acquisitions of a frequency-encoded MRI signal with two perpendicular phase encodings. Here, 3D MRI movies of a rapidly rotating banana were obtained at 5 ms resolution or 200 frames per second. In conclusion, spatial phase-contrast 3D MRI of 2D or 1D structures is respectively two or four orders of magnitude faster than conventional 3D MRI. Copyright © 2011 Wiley-Liss, Inc.
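    The encoding principle, a phase difference proportional to the mean position along the perpendicular dimension, can be sketched as follows. This is a simplified, hypothetical discretization of the idea (not the paper's reconstruction pipeline): assume two complex projection images whose phase-encoding gradients differ by one step spanning 2π of phase across the field of view.

```python
import numpy as np

def position_from_phase(img_a, img_b, fov_mm):
    """Mean through-plane position of a thin structure from two complex
    projection images whose phase-encoding gradients differ by one step.

    Assumes the gradient step spans 2*pi of phase across the field of
    view, so position follows from the per-pixel phase difference:
        z = dphi / (2*pi) * FOV
    Valid while z stays within (-FOV/2, FOV/2] (no phase wrapping).
    """
    dphi = np.angle(img_b * np.conj(img_a))   # phase difference in (-pi, pi]
    return dphi / (2.0 * np.pi) * fov_mm
```

    Because the perpendicular dimension holds the object at only one location per pixel, this single phase difference replaces an entire phase-encoding loop, which is where the two-to-four orders of magnitude speedup comes from.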

  20. 3D printing of natural organic materials by photochemistry

    NASA Astrophysics Data System (ADS)

    Da Silva Gonçalves, Joyce Laura; Valandro, Silvano Rodrigo; Wu, Hsiu-Fen; Lee, Yi-Hsiung; Mettra, Bastien; Monnereau, Cyrille; Schmitt Cavalheiro, Carla Cristina; Pawlicka, Agnieszka; Focsan, Monica; Lin, Chih-Lang; Baldeck, Patrice L.

    2016-03-01

    In previous works, we have used two-photon induced photochemistry to fabricate 3D microstructures based on proteins, antibodies, and enzymes for different types of bio-applications. Among them, we can cite collagen lines to guide the movement of living cells, peptide-modified GFP biosensing pads to detect Gram-positive bacteria, antibody pads to determine the type of red blood cells, and trypsin columns in a microfluidic channel to obtain a real-time biochemical micro-reactor. In this paper, we report for the first time on two-photon 3D microfabrication of DNA material. We also present our preliminary results on using a commercial 3D printer based on a video projector to polymerize sliced layers of gelatine objects.

  1. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they do not have any information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  2. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose, a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
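    The RANSAC-based congruence analysis described above can be sketched as follows: a 3D similarity transform (here Umeyama's closed-form least-squares solution) is fitted to randomly selected point groups from one epoch to the next, and the transform with the largest inlier set marks the stable (undeformed) areas. Function names, sample size, and threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def similarity_transform(P, Q):
    """Closed-form least-squares 3D similarity transform (Umeyama):
    scale s, rotation R, translation t with Q ~= s * P @ R.T + t."""
    n = len(P)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - cP, Q - cQ
    H = Pc.T @ Qc / n                        # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # reflection guard
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum() * n
    t = cQ - s * R @ cP
    return s, R, t

def congruent_points(P, Q, n_iter=200, sample=4, tol=0.01, seed=0):
    """RANSAC congruence analysis: fit similarity transforms to random
    point groups from epoch P to epoch Q; the transform with the most
    inliers marks the stable (undeformed) areas."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P), size=sample, replace=False)
        s, R, t = similarity_transform(P[idx], Q[idx])
        resid = np.linalg.norm(Q - (s * P @ R.T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

    Once the stable areas are identified, the best-fit transform over those inliers can serve to correct the exterior orientation of each epoch, which is the use case the paper targets.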

  3. Automatic building LOD copies for multitextured objects

    NASA Astrophysics Data System (ADS)

    Souetov, Andrew E.

    2000-01-01

    This article is dedicated to research on geometric level-of-detail technology for real-time 3D visualization systems. The article covers the conditions of applicability of the method and an overview of existing approaches, with their drawbacks and advantages. New technology guidelines are suggested as an alternative to existing methods.

  4. 3D Viewing: Odd Perception - Illusion? reality? or both?

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Iizasa, K.

    2008-12-01

    We live in three-dimensional space, don't we? It could be at least four dimensions, but that is another story. Either way, our capability for 3D viewing is constrained by our 2D perception (our intrinsic tools of perception). I carried out a few visual experiments using topographic data to show our intrinsic (or biological) shortcomings in 3D recognition of our world. The results of the experiments suggest: (1) a 3D surface model displayed on a 2D computer screen (or paper) always admits two interpretations of the 3D surface geometry; if we choose one of the interpretations (in other words, if we are hooked by one of the two percepts), we maintain that perception even as the 3D model changes its viewing perspective on the screen. (2) More interesting is that a real 3D solid object (e.g., made of clay) also gives the two interpretations of its geometry mentioned above, if we observe the object with one eye. The most famous example of this viewing illusion is by the magician Jerry Andrus (who died in 2007), who made a super-cool paper-crafted dragon that deceives the one-eyed viewer. I confirmed this phenomenon by my experiments in another perceptually persuasive (deceptive?) way. My conclusion is that this illusion is intrinsic, i.e., reality for humans, because even though we live in 3D space, our perceptual tools (eyes) are composed of 2D sensors whose information is reconstructed into 3D by our experience-based brain. So, (3) when we observe a 3D surface model on a computer screen, we are always one eye short, even if we use both eyes. One last suggestion from my experiments is that recent highly sophisticated 3D models may carry more information than human perception can handle properly; i.e., we may not be understanding the 3D world (geospace) at all, just experiencing an illusion.

  5. Tubular Crystals and Helical Arrays: Structural Determination of HIV-1 Capsid Assemblies Using Iterative Helical Real-Space Reconstruction

    PubMed Central

    Zhang, Peijun; Meng, Xin; Zhao, Gongpu

    2013-01-01

    Helical structures are important in many different life forms and are well-suited for structural studies by cryo-EM. A unique feature of helical objects is that a single projection image contains all the views needed to perform a three-dimensional (3D) crystallographic reconstruction. Here, we use HIV-1 capsid assemblies to illustrate the detailed approaches to obtain 3D density maps from helical objects. Mature HIV-1 particles contain a conical- or tubular-shaped capsid that encloses the viral RNA genome and performs essential functions in the virus life cycle. The capsid is composed of capsid protein (CA) oligomers which are helically arranged on the surface. The N-terminal domain (NTD) of CA is connected to its C-terminal domain (CTD) through a flexible hinge. Structural analysis of two- and three-dimensional crystals provided molecular models of the capsid protein (CA) and its oligomer forms. We determined the 3D density map of helically assembled HIV-1 CA hexamers at 16 Å resolution using an iterative helical real-space reconstruction method. Docking of atomic models of CA-NTD and CA-CTD dimer into the electron density map indicated that the CTD dimer interface is retained in the assembled CA. Furthermore, molecular docking revealed an additional, novel CTD trimer interface. PMID:23132072

  6. 3D visualization of two-phase flow in the micro-tube by a simple but effective method

    NASA Astrophysics Data System (ADS)

    Fu, X.; Zhang, P.; Hu, H.; Huang, C. J.; Huang, Y.; Wang, R. Z.

    2009-08-01

    The present study provides a simple but effective method for 3D visualization of two-phase flow in a micro-tube. An isosceles right-angle prism, combined with a mirror beveled at 45° to the prism, is employed to synchronously obtain the front and side views of the flow patterns with a single camera, where the locations of the prism and the micro-tube required for clear imaging satisfy a fixed relationship that is specified in the present study. The optical design is validated by demanding visualization work in the cryogenic temperature range. The image deformation due to refraction and the geometrical configuration of the test section is quantitatively investigated. It is calculated that the image of the inner diameter is enlarged by about 20% compared to the real object, which is validated by the experimental results. Meanwhile, the image deformation when a rectangular optical correction box is added outside the circular tube is comparatively investigated; in that case, it is calculated that the image of the inner diameter is reduced by about 20% compared to the real object. The 3D reconstruction process based on the two views is conducted in three steps, which shows that the 3D visualization method can easily be applied to two-phase flow research in micro-scale channels and improves the measurement accuracy of important two-phase flow parameters such as void fraction and the spatial distribution of bubbles.
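
    The reported ~20% enlargement stems from refraction at the curved tube wall. As a hedged illustration of the effect (not the paper's actual calculation), paraxial imaging by a single refracting surface already shows that a flat interface leaves lateral size unchanged while a curved wall magnifies; all indices and distances below are assumed values:

```python
import math

def refract_surface(u, n1, n2, R):
    """Paraxial imaging by one refracting surface (Cartesian sign
    convention: light travels toward +x, a real object has u < 0, R is
    the surface's radius of curvature).  Solves n2/v - n1/u = (n2-n1)/R
    and returns the image distance v and the transverse magnification
    m = (n1*v) / (n2*u)."""
    inv_v = (n2 - n1) / R + n1 / u
    v = n2 / inv_v
    m = (n1 * v) / (n2 * u)
    return v, m

# Sanity check: a flat interface (R -> infinity) compresses apparent
# depth (v = u*n2/n1) but leaves lateral magnification at exactly 1.
v, m = refract_surface(-10.0, 1.33, 1.0, math.inf)

# A curved wall (center of curvature on the object side, R < 0) magnifies:
# an object 0.5 mm behind the outer surface of a glass tube (n = 1.5,
# R = -2 mm) appears laterally enlarged.
v2, m2 = refract_surface(-0.5, 1.5, 1.0, -2.0)
```

    The sign of the effect flips when a flat-walled correction box replaces the curved outer surface, consistent with the enlargement versus reduction reported above.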

  7. Affective three-dimensional brain-computer interface created using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-12-01

    To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs) because involuntarily motivated selective attention driven by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimensional (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counterbalanced order. One-way repeated measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly reduced in the prism condition compared to the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.
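
    SSVEP-based selection, as used above, rests on detecting elevated spectral power at the flicker frequency of the attended target. A minimal single-channel sketch on synthetic data follows; the sampling rate and stimulus frequencies are assumptions, and real systems typically use multi-electrode methods such as canonical correlation analysis rather than a single FFT bin:

```python
import numpy as np

def ssvep_power(signal, fs, freq):
    """Power of `signal` at `freq` Hz, read from the nearest DFT bin."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

fs = 250.0                               # assumed EEG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic EEG: a 12 Hz SSVEP component buried in noise.
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

# Classify by comparing power at the candidate stimulus frequencies.
candidates = [10.0, 12.0, 15.0]
detected = max(candidates, key=lambda f: ssvep_power(eeg, fs, f))
```

    The affective manipulation in the study aims precisely at raising the SSVEP component's amplitude relative to the noise floor, which makes this comparison more reliable.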

  8. Implementation and clinical application of a deformation method for fast simulation of biological tissue formed by fibers and fluid.

    PubMed

    Sardinha, Ana Gabriella de Oliveira; Oyama, Ceres Nunes de Resende; de Mendonça Maroja, Armando; Costa, Ivan F

    2016-01-01

    The aim of this paper is to provide a general discussion, an algorithm, and actual working programs of the deformation method for fast simulation of biological tissue formed by fibers and fluid. To demonstrate the benefit of the software for clinical applications, we successfully used our computational program to deform a 3D breast image acquired from patients with a 3D scanner in a real hospital environment. The method implements a quasi-static solution for elastic global deformations of objects. Each pair of vertices of the surface is connected and defines an elastic fiber. The set of all the elastic fibers defines a mesh of smaller size than volumetric meshes, allowing for simulation of complex objects with less computational effort. Behavior similar to that governed by the stress tensor is obtained through the volume conservation equation, which mixes the 3D coordinates. Step by step, we show the computational implementation of this approach. As an example, a 2D rectangle formed by only 4 vertices is solved and, for this simple geometry, all intermediate results are shown. Actual implementations of these ideas in the form of working computer routines are provided for general 3D objects, including a clinical application.
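
    The fiber idea above, every pair of surface vertices joined by an elastic fiber and relaxed to a quasi-static equilibrium, can be illustrated with the paper's own 4-vertex rectangle example. This sketch deliberately omits the volume-conservation equation and uses plain Hookean springs with gradient-descent relaxation, so it is a simplified stand-in for the actual method:

```python
import itertools, math

def relax(vertices, fixed, rest, k=1.0, step=0.05, iters=2000):
    """Quasi-static relaxation of an all-pairs spring ('fiber') network.
    `vertices` is a list of [x, y]; indices in `fixed` are held in place;
    `rest[(i, j)]` is the rest length of the fiber joining i and j."""
    v = [list(p) for p in vertices]
    for _ in range(iters):
        force = [[0.0, 0.0] for _ in v]
        for i, j in itertools.combinations(range(len(v)), 2):
            dx = v[j][0] - v[i][0]
            dy = v[j][1] - v[i][1]
            d = math.hypot(dx, dy)
            f = k * (d - rest[(i, j)]) / d      # Hooke's law along the fiber
            force[i][0] += f * dx; force[i][1] += f * dy
            force[j][0] -= f * dx; force[j][1] -= f * dy
        for i in range(len(v)):
            if i not in fixed:
                v[i][0] += step * force[i][0]
                v[i][1] += step * force[i][1]
    return v

# Unit square with all 6 pairwise fibers at their initial lengths as rest
# lengths; pull one corner outward and let the free vertices re-equilibrate.
square = [[0, 0], [1, 0], [1, 1], [0, 1]]
rest = {(i, j): math.dist(square[i], square[j])
        for i, j in itertools.combinations(range(4), 2)}
square[2] = [1.3, 1.3]                       # imposed displacement
deformed = relax(square, fixed={0, 2}, rest=rest)
```

    Because the fibers form a complete graph on the surface vertices, displacing one corner redistributes strain to the free vertices without any volumetric mesh, which is the source of the method's computational savings.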

  9. 3D localization of electrophysiology catheters from a single x-ray cone-beam projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert, Normand, E-mail: normand.robert@sri.utoronto.ca; Polack, George G.; Sethi, Benu

    2015-10-15

    Purpose: X-ray images allow the visualization of percutaneous devices such as catheters in real time but inherently lack depth information. The provision of 3D localization of these devices from cone-beam x-ray projections would be advantageous for interventions such as electrophysiology (EP), whereby the operator needs to return a device to the same anatomical locations during the procedure. A method to achieve real-time 3D single view localization (SVL) of an object of known geometry from a single x-ray image is presented. SVL exploits the change in the magnification of an object as its distance from the x-ray source is varied. The x-ray projection of an object of interest is compared to a synthetic x-ray projection of a model of said object as its pose is varied. Methods: SVL was tested with a 3 mm spherical marker and an electrophysiology catheter. The effect of x-ray acquisition parameters on SVL was investigated. An independent reference localization method was developed to compare results when imaging a catheter translated via a computer-controlled three-axis stage. SVL was also performed on clinical fluoroscopy image sequences. A commercial navigation system was used in some clinical image sequences for comparison. Results: SVL estimates exhibited little change as x-ray acquisition parameters were varied. The reproducibility of catheter position estimates in phantoms, denoted by the standard deviations, was (σ_x, σ_y, σ_z) = (0.099 mm, 0.093 mm, 2.2 mm), where x and y are parallel to the detector plane and z is the distance from the x-ray source. Position estimates (x, y, z) exhibited a 4% systematic error (underestimation) when compared to the reference method. The authors demonstrated that EP catheters can be tracked in clinical fluoroscopic images. Conclusions: It has been shown that EP catheters can be localized in real time in phantoms and clinical images at fluoroscopic exposure rates. Further work is required to characterize performance in clinical images as well as the sensitivity to clinical image quality.
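
    The magnification principle SVL exploits can be shown with a simple cone-beam model: the shadow of an object of known size shrinks as the object moves away from the source, so depth can be recovered from projected size. The marker size and geometry below are illustrative; the actual method fits the full pose of a 3D model to the projection:

```python
def projected_size(D, z, sdd):
    """Cone-beam projection: a sphere of diameter D at distance z from the
    x-ray source casts a shadow of diameter ~ D*sdd/z on a detector at
    source-to-detector distance sdd (small-object approximation)."""
    return D * sdd / z

def depth_from_size(D, d_img, sdd):
    """Invert the magnification to recover the source-to-object distance."""
    return D * sdd / d_img

# A 3 mm marker, assumed source-to-detector distance 1200 mm, depth 750 mm.
d = projected_size(3.0, 750.0, 1200.0)
z = depth_from_size(3.0, d, 1200.0)
```

    Because projected size varies only slowly with depth, small size-measurement errors map to large depth errors, which is consistent with σ_z being more than an order of magnitude larger than the in-plane standard deviations reported above.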

  10. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning system (GPS) sensor; the user then establishes the exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step-recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face-detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, and modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  11. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  12. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot's actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
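
    The "simple analytic geometry" click-to-3D idea can be sketched as a ray-plane intersection with a calibrated camera. The downward-looking geometry and the intrinsic parameters below are assumptions for illustration, not the paper's actual setup:

```python
import numpy as np

def pixel_to_ground(u, v, K, h):
    """Back-project pixel (u, v) through a calibrated pinhole camera
    mounted at height h and looking straight down at the ground plane,
    and return the intersection of the viewing ray with the ground
    (camera coordinates; the third component is the depth h)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray[2] == 1 by construction
    return (h / ray[2]) * ray

# Assumed intrinsics: 800 px focal length, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
target = pixel_to_ground(720.0, 240.0, K, h=1.2)     # click 400 px right of center
```

    In a real deployment the camera pose relative to the ground plane comes from the robot's geometry, but the computation stays a one-line ray-plane intersection rather than a full reconstruction algorithm.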

  13. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot's actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  14. Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research.

    PubMed

    Siena, Francesco Luke; Byrom, Bill; Watts, Paul; Breedon, Philip

    2018-02-05

    Applications utilising 3D camera technologies for the measurement of health outcomes in the health and wellness sector continue to expand. The Intel® RealSense™ is one of the leading 3D depth-sensing cameras currently available on the market and lends itself to use in many applications, including robotics, automation, and medical systems. One of the most prominent areas is the production of interactive solutions for rehabilitation, which includes gait analysis and facial tracking. Advancements in depth camera technology have resulted in a noticeable increase in the integration of these technologies into portable platforms, suggesting significant future potential for pervasive in-clinic and field-based health assessment solutions. This paper reviews the technical capabilities of the Intel RealSense technology and discusses its application to clinical research, including examples where the Intel RealSense camera range has been used for the measurement of health outcomes. This review supports the use of the technology to develop robust, objective movement- and mobility-based endpoints to enable accurate tracking of the effects of treatment interventions in clinical trials.

  15. Tracking multiple particles in fluorescence time-lapse microscopy images via probabilistic data association.

    PubMed

    Godinez, William J; Rohr, Karl

    2015-02-01

    Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
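
    The combined-innovation update described above can be sketched for a 1D constant-velocity track. In this simplified version the association weights are supplied directly rather than computed from an image likelihood, and the covariance-inflation term of full probabilistic data association is omitted:

```python
import numpy as np

def pda_kalman_step(x, P, measurements, weights, F, H, Q, R):
    """One predict/update cycle of a Kalman filter in which several
    candidate measurements are fused via a combined innovation weighted
    by their association probabilities (the core idea of probabilistic
    data association, here with given weights)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Combined innovation: probability-weighted sum of the residuals
    z_pred = H @ x
    nu = sum(w * (z - z_pred) for w, z in zip(weights, measurements))
    # Standard Kalman gain and update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ nu
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model in 1D: state is [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
x, P = np.array([0.0, 1.0]), np.eye(2)
# Two nearby detections; the closer one carries more association weight.
x, P = pda_kalman_step(x, P, [np.array([1.1]), np.array([1.6])],
                       [0.8, 0.2], F, H, Q, R)
```

    The full approach in the paper additionally derives the weights from an image likelihood, inflates the covariance to reflect association uncertainty, and runs interacting multiple models on top of this single-model step.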

  16. 4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas

    2016-04-01

    The last decade has witnessed extensive application of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal, near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR has large potential for landscape objects with high and varying rates of change (e.g. plant growth) and also for phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation of agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the high number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or movement of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or scanner). Secondly, we will show our empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and the limitations of fully automated, near real-time 4D-LiDAR monitoring in geosciences.
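
    Co-registration against tie reflectors reduces to estimating a rigid transform from corresponding 3D points, for which the Kabsch/SVD solution is the standard building block. The abstract does not describe the projects' actual pipeline, so the following is only a generic sketch with synthetic reflector positions:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (R, t) with B ≈ A @ R.T + t, estimated
    from corresponding 3D points (rows of A and B) via the Kabsch algorithm."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known rotation about z plus a translation.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.05])
refl = np.array([[0, 0, 0], [10, 0, 0], [0, 8, 0], [3, 3, 2.0]])  # tie reflectors
moved = refl @ R_true.T + t_true
R, t = rigid_fit(refl, moved)
```

    A large residual after this fit is exactly the kind of signal a monitoring system can use as a trigger that a reflector, or the scanner itself, has been moved.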

  17. Clinical 3D printing: A protected health information (PHI) and compliance perspective.

    PubMed

    Feldman, Henry; Kamali, Parisa; Lin, Samuel J; Halamka, John D

    2018-07-01

    Advanced manufacturing techniques such as 3-dimensional (3D) printing, while mature in other industries, are starting to become more commonplace in clinical care. Clinicians are producing physical objects based on patient clinical data for use in planning care and educating patients, all of which should be managed like any other healthcare system data, except that they exist in the "real" world. There are currently no provisions in the Health Insurance Portability and Accountability Act (HIPAA), either in its original 1996 form or in more recent updates, that address the nature of physical representations of clinical data. We submit that if we define the source data as protected health information (PHI), then objects 3D-printed from those data need to be treated both as PHI and, if used clinically, as part of the clinical record, and we propose some basic guidelines for quality and privacy, as for all documentation, until regulatory frameworks can catch up to this technology. Many of the mechanisms designed in the paper-and-film-chart era will work well with 3D-printed patient data. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Time domain SAR raw data simulation using CST and image focusing of 3D objects

    NASA Astrophysics Data System (ADS)

    Saeed, Adnan; Hellwich, Olaf

    2017-10-01

    This paper presents the use of a general-purpose electromagnetic simulator, CST, to simulate realistic synthetic aperture radar (SAR) raw data of three-dimensional objects. The raw data are later focused in MATLAB using the range-Doppler algorithm. Within CST Microwave Studio, a replica of the TerraSAR-X chirp signal is incident upon a modeled corner reflector (CR) whose design and material properties are identical to those of the real one. After defining the mesh and other appropriate settings, the reflected wave is measured at several distant points within a line parallel to the viewing direction. This is analogous to an array antenna and is synthesized to create a long aperture for SAR processing. The time domain solver in CST is based on the solution of the differential form of Maxwell's equations. The data exported from CST are arranged into a 2D matrix with range and azimuth axes. A Hilbert transform is applied to convert the real signal to complex data with phase information. Range compression, range cell migration correction (RCMC), and azimuth compression are applied in the time domain to obtain the final SAR image. This simulation can provide valuable information to clarify which real-world objects produce images suitable for high-accuracy identification in SAR images.
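
    Range compression, the first of the focusing steps listed above, is a matched-filter correlation of each range line with the transmitted chirp replica. A sketch on a synthetic range line follows; the chirp parameters are illustrative, not TerraSAR-X values:

```python
import numpy as np

def range_compress(raw, chirp):
    """Range compression by matched filtering in the frequency domain:
    correlate each range line with the transmitted chirp replica
    (zero-padded FFTs so the circular correlation equals the linear one)."""
    n = raw.shape[-1] + chirp.size - 1
    RAW = np.fft.fft(raw, n, axis=-1)
    REF = np.conj(np.fft.fft(chirp, n))
    return np.fft.ifft(RAW * REF, axis=-1)

# Synthetic range line: a linear-FM chirp echoed by one point scatterer.
fs, T, B = 100e6, 2e-6, 50e6                 # illustrative sampling/chirp values
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)
line = np.zeros(1024, dtype=complex)
delay = 300
line[delay:delay + chirp.size] = chirp       # echo delayed by 300 samples
compressed = range_compress(line, chirp)
peak = int(np.argmax(np.abs(compressed)))    # collapses to a sharp peak at the delay
```

    The same correlation idea, applied along the synthesized aperture after RCMC, is what azimuth compression does in the range-Doppler algorithm.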

  19. Object classification for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Regensburger, Uwe; Graefe, Volker

    1991-03-01

    Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.

  20. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.

  1. Sculpting 3D worlds with music: advanced texturing techniques

    NASA Astrophysics Data System (ADS)

    Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.

    1996-04-01

    Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D `music videos' and interactive landscapes for live performance.

  2. Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator.

    PubMed

    Sappenfield, Joshua Warren; Smith, William Brit; Cooper, Lou Ann; Lizdas, David; Gonsalves, Drew B; Gravenstein, Nikolaus; Lampotang, Samsun; Robinson, Albert R

    2018-07-01

    We investigated whether visual augmentation (3D, real-time, color visualization) of a procedural simulator improved performance during training in the supraclavicular approach to the subclavian vein, not as widely known or used as its infraclavicular counterpart. To train anesthesiology residents to access a central vein, a mixed reality simulator with emulated ultrasound imaging was created using an anatomically authentic, 3D-printed, physical mannequin based on a computed tomographic scan of an actual human. The simulator has a corresponding 3D virtual model of the neck and upper chest anatomy. Hand-held instruments such as a needle, an ultrasound probe, and a virtual camera controller are directly manipulated by the trainee and tracked and recorded with submillimeter resolution via miniature, 6 degrees of freedom magnetic sensors. After Institutional Review Board approval, 69 anesthesiology residents and faculty were enrolled and received scripted instructions on how to perform subclavian venous access using the supraclavicular approach based on anatomic landmarks. The volunteers were randomized into 2 cohorts. The first used real-time 3D visualization concurrently with trial 1, but not during trial 2. The second did not use real-time 3D visualization concurrently with trial 1 or 2. However, after trial 2, they observed a 3D visualization playback of trial 2 before performing trial 3 without visualization. An automated scoring system based on time, success, and errors/complications generated objective performance scores. Nonparametric statistical methods were used to compare the scores between subsequent trials, differences between groups (real-time visualization versus no visualization versus delayed visualization), and improvement in scores between trials within groups. 
Although the real-time visualization group demonstrated significantly better performance than the delayed visualization group on trial 1 (P = .01), there was no group-dependent difference in gain scores between performance on the first trial and performance on the final trial (P = .13). In the delayed visualization group, the difference in performance between trial 1 and trial 2 was not significant (P = .09); reviewing performance on trial 2 before trial 3 resulted in improved performance when compared to trial 1 (P < .0001). There was no significant difference in median scores (P = .13) between the real-time visualization and delayed visualization groups for the last trial, after both groups had received visualization. Participants reported a significant improvement in confidence in performing supraclavicular access to the subclavian vein. Standard deviations of scores, a measure of performance variability, decreased in the delayed visualization group after viewing the visualization. Real-time visual augmentation (3D visualization) in the mixed reality simulator improved performance during supraclavicular access to the subclavian vein. No difference was seen in the final trial between the group that received real-time visualization and the group that had delayed visualization playback of their prior attempt. Training with the mixed reality simulator improved participant confidence in performing an unfamiliar technique.

  3. Using Distance Sensors to Perform Collision Avoidance Maneuvres on Uav Applications

    NASA Astrophysics Data System (ADS)

    Raimundo, A.; Peres, D.; Santos, N.; Sebastião, P.; Souto, N.

    2017-08-01

    Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. The operability of a UAV has proved that some tasks and operations can be done easily and at a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous missions, which is very useful in certain UAV applications, such as meteorology, vigilance systems, agriculture, environment mapping, and search and rescue operations. One of the biggest problems a UAV faces is the possibility of collision with other objects in the flight area. To avoid this, an algorithm was developed and implemented to prevent UAV collisions with other objects. The "Sense and Avoid" algorithm was developed as a system for UAVs to avoid objects on a collision course. This algorithm uses a Light Detection and Ranging (LiDAR) sensor to detect objects facing the UAV in mid-flight. The light sensor is connected to on-board hardware, the Pixhawk flight controller, which interfaces with additional hardware: a Raspberry Pi. Communications between the Ground Control Station and the UAV are made via Wi-Fi or third- or fourth-generation cellular networks (3G/4G). Tests were made to evaluate the overall performance of the "Sense and Avoid" algorithm in two different environments: a simulated 3D environment and a real outdoor environment. Both modes worked successfully in the simulated 3D environment, and the "Brake" mode also succeeded outdoors, proving the concept.
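
    At its core, the "Brake" mode validated outdoors reduces to thresholding the forward rangefinder distance. A deliberately minimal sketch follows; the threshold value is an assumption, and the real system additionally runs through the Pixhawk/Raspberry Pi control loop with thresholds tuned to flight speed:

```python
def sense_and_avoid(distance_m, brake_threshold_m=5.0):
    """Minimal 'Brake' mode: command a stop when the forward-facing
    rangefinder reports an obstacle closer than the threshold."""
    return "BRAKE" if distance_m < brake_threshold_m else "CONTINUE"

# Successive LiDAR range readings as an obstacle approaches.
readings = [12.0, 8.5, 4.2]
commands = [sense_and_avoid(r) for r in readings]
```

    A production threshold would need to account for sensor latency and braking distance at the current airspeed, which is why the paper evaluates the algorithm in both simulated and real flights.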

  4. 3D Photographs in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kiel, St.

    2013-07-01

    This paper on providing "oo-information" (= objective object-information) on cultural monuments and sites, based on 3D photographs, is also a contribution of CIPA Task Group 3 to the 2013 CIPA Symposium in Strasbourg. To stimulate interest in 3D photography among scientists as well as amateurs, 3D masterpieces are presented. It is shown that, owing to their high documentary value ("near reality"), 3D photographs support, e.g., the recording, visualization, interpretation, preservation and restoration of architectural and archaeological objects. This includes samples of excavation documentation, 3D coordinate calculation, and 3D photographs applied for virtual-museum purposes, as educational tools, and for spatial structure enhancement, which holds in particular for inscriptions and rock art. This paper is also an invitation to participate in a systematic survey of existing international archives of 3D photographs; in this respect, first results toward defining an optimum digitization rate for analog stereo views are reported. It is more than overdue that, in addition to access to international archives of 3D photography, the available 3D photography data should appear in a global GIS (cloud) system such as, e.g., Google Earth. This contribution also deals with exposing new 3D photographs to document monuments of importance for cultural heritage, including the use of 3D and single-lens cameras on a 10 m telescopic staff for extremely low, earth-based "airborne" 3D photography as well as for "underwater staff photography". In addition, the use of captive balloon and drone platforms for 3D photography in cultural heritage is reported. 
We would like to emphasize that the still underestimated 3D effect on real objects even allows, e.g., the spatial perception of extremely small scratches as well as of nuances in color differences. Though 3D photographs are a well-established basic photographic and photogrammetric tool, they are still a matter of research and practical improvement: - For example, multistage concepts for 3D heritage photographs (e.g., combining before-and-after images, images with different focus or taken at different times of day, combining 3D imagery from different sensors, and comparing 3D imagery with drawings), and even standards for exposing and processing 3D heritage photographs, are only some topics of recent research. - To advise on state-of-the-art 3D visualization methodology for cultural heritage purposes, an updated synoptic overview, without claiming completeness, is also given. - 3D photographs should increasingly replace old-fashioned, subjectively interpreted manual drawings (in 2D only) of heritage monuments. - Currently we are witnessing early developments showing cultural heritage objects in 3D crystal as well as in 3D printings.

  5. FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples

    PubMed Central

    Scrofani, G.; Sola-Pikabea, J.; Llavador, A.; Sanchez-Ortiga, E.; Barreiro, J. C.; Saavedra, G.; Garcia-Sucerquia, J.; Martínez-Corral, M.

    2017-01-01

    In this work, the Fourier integral microscope (FIMic), an ultimate design of 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhancement in depth of field and a twofold improvement in spatial resolution in comparison with conventional integral microscopy are reported. Our claims are supported by theoretical analysis and experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens. PMID:29359107

  6. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured-light systems and laser scanners have made them reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach: although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One factor affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source software, and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors, and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. A lot of research has been carried out to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the identification of an appropriate combination of sensor and software to provide a complete model with the highest accuracy.
    To do this, different software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Four small objects with distinct geometric properties and particular complexities were chosen, and accurate models of them, serving as reliable ground truth, were created using an ATOS Compact Scan 2M 3D scanner. Images were taken with a Fujifilm Real 3D stereo camera, an Apple iPhone 5, and a Nikon D3200 professional camera, and three-dimensional models of the objects were obtained with each software package. Finally, a comprehensive comparison based on a detailed review of the results on the data set showed that the best combination of software and sensor for generating three-dimensional models is directly related to the object shape as well as the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained with the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 were second and third in this comparison, respectively. On the other hand, Visual SfM, Sure, and Agisoft competed closely for the most accurate and complete model of the objects, and the best software differed according to the geometric properties of the object.

  7. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation, and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm, per-pixel linear interpolation along a loop line buffer, is presented. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling workflow has three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness of all pixels in the two adjacent textures using various algorithms; however, these algorithms have limited effect and the fissure line persists. The proposed algorithm handles the gray unevenness of the two adjacent textures: the fissure line in the overlapping textures is eliminated, and the gray transition in the overlapping section becomes smoother.
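The general idea of per-pixel linear interpolation across an overlap region can be illustrated with a simple cross-fade, in which each pixel is a weighted mix of the two textures and the weight ramps linearly across the seam. This is a generic sketch, not the paper's loop-line-buffer implementation:

```python
import numpy as np

# Generic per-pixel linear interpolation across an overlap strip: the weight for
# texture A ramps from 1 to 0 across the width, so the seam fades smoothly.

def blend_overlap(tex_a, tex_b):
    """Cross-fade two equally sized overlap strips along their width (axis 1)."""
    h, w = tex_a.shape[:2]
    alpha = np.linspace(1.0, 0.0, w)  # weight for tex_a: 1 -> 0 across the seam
    alpha = alpha.reshape(1, w, *([1] * (tex_a.ndim - 2)))
    return alpha * tex_a + (1.0 - alpha) * tex_b

a = np.full((4, 5), 100.0)  # brighter texture
b = np.full((4, 5), 50.0)   # darker texture
blended = blend_overlap(a, b)
print(blended[0])  # ramps smoothly from 100 down to 50
```

With this weighting, the two textures agree exactly at their respective edges of the overlap, which is what removes the visible fissure line.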

  8. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  9. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in the circumferential direction without the use of high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle across 360 degrees with appropriate perspectives, as if the animated figures were present.

  10. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
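Best-fitting of a geometric primitive can be illustrated for the simplest case, a plane, via the standard least-squares fit through the singular value decomposition; the recognizer described above also covers cylinders and cones, which need nonlinear fits and are omitted here:

```python
import numpy as np

# Sketch of 3D best-fitting for one primitive type (a plane) on a small point
# cloud. The singular vector with the smallest singular value of the centered
# cloud is the least-squares plane normal.

def fit_plane(points):
    """Least-squares plane through an (N, 3) cloud: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (100, 2))
cloud = np.column_stack([xy, 0.1 * xy[:, 0]])  # synthetic plane z = 0.1 * x
_, normal = fit_plane(cloud)
print(np.round(np.abs(normal), 3))  # close to the unit normal of z - 0.1x = 0
```

In a bin-picking setting, such fits would be run on pre-segmented patches of the scan, and the recovered primitive poses combined to localize the whole work piece.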

  11. Grasping trajectories in a virtual environment adhere to Weber's law.

    PubMed

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to object size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements into a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are discussed.
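The adherence to Weber's law reported here can be stated compactly: the just-noticeable difference (JND) in a magnitude grows in proportion to that magnitude. A minimal sketch, with a purely hypothetical Weber fraction (not a value from the study):

```python
# Weber's law: delta_I = k * I, where k is the Weber fraction.
# The value of k below is illustrative only.

def jnd(intensity, weber_fraction=0.1):
    """Just-noticeable difference under Weber's law."""
    return weber_fraction * intensity

# Under Weber's law, size resolution for a 60 mm object is coarser than for a 20 mm one.
print(jnd(20.0))  # 2.0 (mm)
print(jnd(60.0))  # 6.0 (mm)
```

Analytic processing, by contrast, would predict a JND that stays roughly constant across object sizes, which is the pattern typically seen in real 3D grasping.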

  12. Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.

    PubMed

    Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin

    2009-01-01

    Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
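The marginal-space idea, pruning candidates stage by stage instead of scoring the full joint parameter grid, can be sketched schematically. The scoring functions below are toy stand-ins for the learned classifiers, and the two-stage search is a simplification of the position/orientation/scale cascade described above:

```python
import itertools

# Schematic marginal-space search: rank candidates in a low-dimensional marginal
# space first, keep only the top-k, then extend the survivors with further
# parameters. This avoids scoring the full joint grid.

def msl_search(score_pos, score_pos_orient, positions, orientations, k=3):
    # Stage 1: rank positions alone, keep the top-k survivors.
    top_pos = sorted(positions, key=score_pos, reverse=True)[:k]
    # Stage 2: extend survivors with orientation and rank the joint candidates.
    cands = [(p, o) for p, o in itertools.product(top_pos, orientations)]
    return max(cands, key=lambda c: score_pos_orient(*c))

positions = range(100)
orientations = range(36)
best = msl_search(lambda p: -abs(p - 42),
                  lambda p, o: -abs(p - 42) - abs(o - 7),
                  positions, orientations)
print(best)  # (42, 7)
```

Here stage 2 evaluates only 3 x 36 candidates instead of the 100 x 36 of an exhaustive joint search, which is the source of MSL's speed-up.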

  13. Ames Lab 101: Real-Time 3D Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Song

    2010-08-02

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  14. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2017-12-22

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  15. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    NASA Astrophysics Data System (ADS)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology, and such enhancement can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems; however, none of these works focuses on providing three-dimensional (3D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.

  16. Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.

    PubMed

    Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael

    2017-08-22

    We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as a problem of dictionary learning, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe that any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e., self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios, and its comparison and relation to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.

  17. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.
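The STL stage of the DICOM-to-STL-to-gcode pipeline can be illustrated with a minimal binary STL writer. The format (an 80-byte header, a triangle count, then 50 bytes per facet) is the standard binary STL layout; the toy triangle below stands in for a surface mesh that would really come from the segmented scan:

```python
import struct

# Minimal binary STL writer. Normals are written as zeros, which most slicers
# and viewers tolerate because they recompute normals from the vertex winding.

def write_binary_stl(path, triangles):
    """triangles: list of 3-vertex tuples, each vertex an (x, y, z) triple."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                             # 80-byte header
        f.write(struct.pack("<I", len(triangles)))      # facet count
        for tri in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))  # facet normal (zeroed)
            for v in tri:
                f.write(struct.pack("<3f", *v))         # three vertices
            f.write(struct.pack("<H", 0))               # attribute byte count

tri = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
write_binary_stl("toy.stl", tri)
print(len(open("toy.stl", "rb").read()))  # 80 + 4 + 50 bytes per facet = 134
```

In the workflow described above, the mesh itself would be extracted from the DICOM volume (e.g., by a marching-cubes surface of the segmentation) before being written out and sent to the slicer for gcode generation.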

  18. Wave Phase-Sensitive Transformation of 3d-Straining of Mechanical Fields

    NASA Astrophysics Data System (ADS)

    Smirnov, I. N.; Speranskiy, A. A.

    2015-11-01

    This work belongs to the research area of oscillatory processes in elastic mechanical systems. The technical result of the innovation is the creation of a spectral set of multidimensional images that reflect time-correlated three-dimensional vector parameters of metrological, and/or estimated, and/or design parameters of oscillations in mechanical systems. Reconstructed images of different dimensionality, integrated in various combinations depending on their objective function, can be used as a homeostatic profile or cybernetic image of oscillatory processes in mechanical systems for an objective estimation of current operational conditions in real time. The innovation can be widely used to enhance the efficiency of monitoring and research of oscillation processes in mechanical systems (objects) in construction, mechanical engineering, acoustics, etc. The concept method of vector vibrometry, based on the application of vector 3D phase-sensitive vibro-transducers, permits unique evaluation of the real stressed-strained states of power aggregates and loaded constructions and opens fundamental innovation opportunities: continuous (on-line) reliable monitoring of turbo-aggregates of electrical machines, compressor installations, bases, supports, pipelines and other objects subject to the damaging effects of vibration; control of the operational safety of technical systems at all stages of the life cycle, including design, test production, tuning, testing, operational use, repairs and life extension; and creation of vibro-diagnostic systems for authentic non-destructive control of the anisotropic characteristics of material resistance of power aggregates and loaded constructions under external effects and operational flaws. The described technology is revolutionary, universal, and common to all branches of the engineering industry and of construction.

  19. Extending DoD Modeling and Simulation with Web 2.0, Ajax and X3D

    DTIC Science & Technology

    2007-09-01

    Supported by Gavin King, who created the well-known and industry-respected Hibernate (O/R) Object-Relational Mapping tool, which binds Java ...most likely a Hibernate derivative). The preceding is where eBay differs from a pure Java EE specification "by the book" implementation. A truly... Java language has come a long way in providing real-world case studies and scalable solutions for the enterprise that are currently in production on

  20. Real-time three-dimensional transesophageal echocardiography in the assessment of mechanical prosthetic mitral valve ring thrombosis.

    PubMed

    Ozkan, Mehmet; Gürsoy, Ozan Mustafa; Astarcıoğlu, Mehmet Ali; Gündüz, Sabahattin; Cakal, Beytullah; Karakoyun, Süleyman; Kalçık, Macit; Kahveci, Gökhan; Duran, Nilüfer Ekşi; Yıldız, Mustafa; Cevik, Cihan

    2013-10-01

    Although 2-dimensional (2D) transesophageal echocardiography (TEE) is the gold standard for the diagnosis of prosthetic valve thrombosis, nonobstructive clots located on mitral valve rings can be missed. Real-time 3-dimensional (3D) TEE has incremental value in the visualization of mitral prosthesis. The aim of this study was to investigate the utility of real-time 3D TEE in the diagnosis of mitral prosthetic ring thrombosis. The clinical outcomes of these patients in relation to real-time 3D transesophageal echocardiographic findings were analyzed. Of 1,263 patients who underwent echocardiographic studies, 174 patients (37 men, 137 women) with mitral ring thrombosis detected by real-time 3D TEE constituted the main study population. Patients were followed prospectively on oral anticoagulation for 25 ± 7 months. Eighty-nine patients (51%) had thrombi that were missed on 2D TEE and depicted only on real-time 3D TEE. The remaining cases were partially visualized with 2D TEE but completely visualized with real-time 3D TEE. Thirty-seven patients (21%) had thromboembolism. The mean thickness of the ring thrombosis in patients with thromboembolism was greater than that in patients without thromboembolism (3.8 ± 0.9 vs 2.8 ± 0.7 mm, p <0.001). One hundred fifty-five patients (89%) underwent real-time 3D TEE during follow-up. There were no thrombi in 39 patients (25%); 45 (29%) had regression of thrombi, and there was no change in thrombus size in 68 patients (44%). Thrombus size increased in 3 patients (2%). Thrombosis was confirmed surgically and histopathologically in 12 patients (7%). In conclusion, real-time 3D TEE can detect prosthetic mitral ring thrombosis that could be missed on 2D TEE and cause thromboembolic events. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Three-Dimensional Color Doppler Echocardiography for Direct Measurement of Vena Contracta Area in Mitral Regurgitation

    PubMed Central

    Little, Stephen H.; Pirat, Bahar; Kumar, Rahul; Igo, Stephen R.; McCulloch, Marti; Hartley, Craig J.; Xu, Jiaqiong; Zoghbi, William A.

    2012-01-01

    OBJECTIVES Our goal was to prospectively compare the accuracy of real-time three-dimensional (3D) color Doppler vena contracta (VC) area and two-dimensional (2D) VC diameter in an in vitro model and in the clinical assessment of mitral regurgitation (MR) severity. BACKGROUND Real-time 3D color Doppler allows direct measurement of VC area and may be more accurate for assessment of MR than the conventional VC diameter measurement by 2D color Doppler. METHODS Using a circulatory loop with an incorporated imaging chamber, various pulsatile flow rates of MR were driven through 4 differently sized orifices. In a clinical study of patients with at least mild MR, regurgitation severity was assessed quantitatively using Doppler-derived effective regurgitant orifice area (EROA), and semiquantitatively as recommended by the American Society of Echocardiography. We describe a step-by-step process to accurately identify the 3D-VC area and compare that measure against known orifice areas (in vitro study) and EROA (clinical study). RESULTS In vitro, 3D-VC area demonstrated the strongest correlation with known orifice area (r = 0.92, p < 0.001), whereas 2D-VC diameter had a weak correlation with orifice area (r = 0.56, p = 0.01). In a clinical study of 61 patients, 3D-VC area correlated with Doppler-derived EROA (r = 0.85, p < 0.001); the relation was stronger than for 2D-VC diameter (r = 0.67, p < 0.001). The advantage of 3D-VC area over 2D-VC diameter was more pronounced in eccentric jets (r = 0.87, p < 0.001 vs. r = 0.6, p < 0.001, respectively) and in moderate-to-severe or severe MR (r = 0.80, p < 0.001 vs. r = 0.18, p = 0.4, respectively). CONCLUSIONS Measurement of VC area is feasible with real-time 3D color Doppler and provides a simple parameter that accurately reflects MR severity, particularly in eccentric and clinically significant MR where geometric assumptions may be challenging. PMID:19356505

  2. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived-depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard cinematic storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene-depth changes within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set the camera separation equal to the nominal human interpupillary distance; the perceived depth on the display is then identical to the scene depth, without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented alongside an existing fixed depth-mapping method. (3) Depth of Field Simulation: apply a depth-of-field (DOF) blur effect to the stereoscopic images, so that only objects inside the DOF are viewed in full sharpness, while objects far from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ between individual stereoscopic displays, and that viewers can cope with a much larger perceived-depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth-mapping method does have an advantage over the fixed depth-mapping method in controlling stereo depth perception.
    The DOF blur effect does not provide the expected improvement for perceived-depth quality control in 3D cinematography. We anticipate the results will be of particular interest for 3D filmmaking and real-time computer games.
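The fixed depth-mapping approach evaluated above amounts to a linear remap of scene depth into a display comfort budget; the dynamic variant adjusts the near/far limits per shot. A sketch with illustrative numbers (not the study's parameters):

```python
# Fixed linear depth mapping from scene space to display space. The near/far
# values below are illustrative; a dynamic method would update scene_near and
# scene_far shot by shot as objects or the camera move.

def map_depth(z, scene_near, scene_far, disp_near, disp_far):
    """Linearly map a scene depth z into the display's comfortable depth budget."""
    t = (z - scene_near) / (scene_far - scene_near)
    return disp_near + t * (disp_far - disp_near)

# Scene depths 1..50 m squeezed into a -0.05..+0.10 m perceived range about the screen.
print(map_depth(1.0, 1.0, 50.0, -0.05, 0.10))   # nearest object -> -0.05
print(map_depth(50.0, 1.0, 50.0, -0.05, 0.10))  # farthest object -> ~0.10
```

Keeping `disp_near`/`disp_far` fixed guarantees comfort but can flatten shallow scenes, which is the motivation for adapting the mapping dynamically.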

  3. Filling gaps in cultural heritage documentation by 3D photography

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.

    2015-08-01

    This contribution promotes 3D photography as an important tool for obtaining objective object information. With World Heritage documentation as well as Heritage protection mainly in mind, another intention of this paper is to stimulate interest in applications of 3D photography among professionals as well as amateurs. In addition, this is an activity report of the international CIPA task group 3. The main part of this paper starts with "Digging the treasure of existing international 3D photography", which concerns not only tangible but also intangible Cultural Heritage. 3D photography clearly supports the recording, visualization, preservation, and restoration of architectural and archaeological objects; therefore the use of 3D photography in Cultural Heritage should increase at an international level. The presented 3D samples represent a voluminous, partly "forgotten" treasure held in international archives of 3D photography. The next chapter is on "Promoting new 3D photography in Cultural Heritage". Though 3D photographs are a well-established basic photographic and photogrammetric tool, even suited to provide "near real" documentation, they are still a matter of research and improvement. Besides 3D cameras, even single-lens cameras are very well suited for photographic 3D documentation purposes in Cultural Heritage. Currently, at the Faculty of Civil Engineering of the University of Applied Sciences Magdeburg-Stendal, low-altitude aerial photography is exposed from a maximum height of 13 m using a hand-held carbon telescopic rod. The use of this "huge selfie stick" is also an (international) recommendation for exposing high-resolution 3D photography of monuments under expedition conditions. In addition to the carbon rod, a captive balloon and a hexacopter UAV platform have recently been in use, mainly to take better synoptic (extremely low-altitude, ground-truth) aerial photography.
    Additional experiments with respect to "easy geometry" and to multistage concepts of 3D photographs in Cultural Heritage have just started. Furthermore, a revised list of the 3D visualization principles, claiming completeness, has been carried out. In an outlook, among other points: it is highly recommended to list every historical and current stereo view with relevance to Cultural Heritage in a global Monument Information System (MIS), as in Google Earth; 3D photographs seem very well suited to complete, or at least partly replace, manual archaeological sketches, in which respect the still underestimated 3D effect will be demonstrated, which even allows, e.g., the spatial perception of extremely small scratches; a consequent engagement with 3D technology even seems to indicate that we are currently experiencing the beginning of a new age of "real 3D PC screens", which could at least complement or even partly replace conventional 2D screens, the spatial visualization being realized without glasses in an all-around vitreous body (in this respect the now widespread lasered crystals showing monuments can be identified as "early bird" 3D products which, due to low resolution and contrast and the lack of color, may even recall the status of the invention of photography by Niépce (1827), but seem to promise a great future in 3D Cultural Heritage documentation); and, last but not least, 3D printers seem more and more to be conquering the IT market, evidently in international competition.

  4. Estimation of 3D shape from image orientations.

    PubMed

    Fleming, Roland W; Holtmann-Rice, Daniel; Bülthoff, Heinrich H

    2011-12-20

    One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
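The stimulus construction described above, "smearing" 2D random noise into oriented patterns, can be imitated with a toy directional averaging filter. This is a hedged Python sketch, not the authors' method: the actual stimuli were produced with orientation- and spatial-frequency-tuned filters, whereas this simply averages noise along one direction.

```python
import random

def smear(noise, dy, dx, length=5):
    """Directionally average ('smear') a 2D noise field along (dy, dx),
    producing oriented structure reminiscent of filtered-noise stimuli.
    Wraps around the grid edges for simplicity."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [noise[(y + k * dy) % h][(x + k * dx) % w]
                    for k in range(length)]
            out[y][x] = sum(vals) / length
    return out

random.seed(0)
noise = [[random.random() for _ in range(16)] for _ in range(16)]
horizontal = smear(noise, 0, 1)  # energy now concentrated along the horizontal
```

Increasing `length` strengthens the orientation signal, loosely analogous to the progressively stronger 3D impression the abstract describes.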

  5. Research on Visualization of Ground Laser Radar Data Based on Osg

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

    Three-dimensional (3D) laser scanning is an advanced technology integrating optics, mechanics, electronics, and computing. It can scan the whole shape and form of a space object with high precision: the point cloud data of a ground object can be collected directly and its structure created for rendering. A capable 3D rendering engine is then used to optimize and display the 3D model, in order to meet the demands of real-time realistic rendering and complex scenes. OpenSceneGraph (OSG) is an open-source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend, and is therefore widely used in virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic, interactive ground LiDAR data visualization platform is built on OSG and the cross-platform C++ application framework Qt. For point cloud data in .txt format and triangulation network data files in .obj format, the platform implements display of both 3D laser point clouds and triangulation networks. Experiments show that the platform has strong practical value, as it is easy to operate and provides good interaction.
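The kind of .txt point-cloud ingestion such a platform performs before handing vertices to the renderer can be sketched in a few lines. A hedged illustration (the paper's platform is C++/OSG/Qt; the simple whitespace-delimited `x y z`-per-line format is an assumption):

```python
import io

def load_xyz_points(stream):
    """Parse a whitespace-delimited .txt point cloud (x y z per line)
    into a list of (x, y, z) tuples, skipping blank and comment lines."""
    points = []
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        x, y, z = map(float, line.split()[:3])
        points.append((x, y, z))
    return points

sample = io.StringIO("# scan 1\n0.0 0.0 0.0\n1.5 2.0 -0.5\n")
print(load_xyz_points(sample))  # [(0.0, 0.0, 0.0), (1.5, 2.0, -0.5)]
```

In the real platform the resulting vertex list would be handed to the rendering engine (e.g. as an OSG vertex array) rather than printed.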

  6. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The system provides user-friendly object compositing and manipulation features, including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for extracting and tracking foreground video objects in real time from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
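Chroma-key extraction of the kind used here to segment foreground video objects can be sketched as a per-pixel distance test against the key colour. A minimal toy in Python, not the paper's scheme: the key colour and tolerance are assumptions, and real systems typically work in YUV with soft mattes.

```python
def chroma_key_mask(pixels, key=(0, 255, 0), tol=60):
    """Return a foreground mask: True where a pixel differs from the
    key colour by more than `tol` in any RGB channel (i.e. not background)."""
    mask = []
    for row in pixels:
        mask.append([any(abs(c - k) > tol for c, k in zip(px, key))
                     for px in row])
    return mask

frame = [[(0, 250, 5), (200, 30, 40)],
         [(10, 255, 0), (90, 90, 90)]]
print(chroma_key_mask(frame))  # [[False, True], [False, True]]
```

Pixels near the green key are flagged as background; everything else is treated as a foreground video object that can then be composited into the 3D scene.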

  7. "SMALLab": Virtual Geology Studies Using Embodied Learning with Motion, Sound, and Graphics

    ERIC Educational Resources Information Center

    Johnson-Glenberg, Mina C.; Birchfield, David; Usyal, Sibel

    2009-01-01

    We present a new and innovative interface that allows the learner's body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory ("SMALLab") uses 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention occur when…

  8. A Sketching Interface for Freeform 3D Modeling

    NASA Astrophysics Data System (ADS)

    Igarashi, Takeo

    This chapter introduces Teddy, a sketch-based modeling system for quickly and easily designing freeform models such as stuffed animals and other rotund objects. The user interactively draws several 2D freeform strokes on the screen and the system automatically constructs plausible 3D polygonal surfaces. The system supports several modeling operations, including constructing a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette, making wide areas fat and narrow areas thin. Teddy, our prototype system, is implemented as a Java program, and the mesh construction runs in real time on a standard PC. An informal user study showed that first-time users master the operations within 10 minutes and can construct interesting 3D models within minutes. We also report a case study in which a high school teacher taught various 3D concepts in geography using the system.
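The inflation step can be imitated on a pixel grid: raise each interior cell in proportion to the square root of its distance to the silhouette boundary, so wide regions bulge more than narrow ones. A toy sketch of this idea only; the real system operates on a triangulated polygon, not a raster grid.

```python
from collections import deque
import math

def inflate(silhouette):
    """Assign each filled cell a height proportional to the square root of
    its (4-connected) distance to the silhouette boundary: wide areas end
    up fat, narrow areas thin, echoing Teddy's inflation rule."""
    h, w = len(silhouette), len(silhouette[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    # Seed the BFS with boundary cells (filled cells touching empty/outside).
    for y in range(h):
        for x in range(w):
            if silhouette[y][x] and any(
                    ny < 0 or ny >= h or nx < 0 or nx >= w
                    or not silhouette[ny][nx]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))):
                dist[y][x] = 1
                q.append((y, x))
    # Propagate distances inward.
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and silhouette[ny][nx] \
                    and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return [[math.sqrt(d) if d else 0.0 for d in row] for row in dist]

grid = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(inflate(grid)[1][1])  # the interior cell peaks above the rim (sqrt(2))
```

On a real silhouette a wide blob yields large interior distances (a fat bulge) while a thin strip stays near height 1, which is exactly the "wide area fat, narrow area thin" behaviour described above.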

  9. The influence of autostereoscopic 3D displays on subsequent task performance

    NASA Astrophysics Data System (ADS)

    Barkowsky, Marcus; Le Callet, Patrick

    2010-02-01

    Viewing 3D content on an autostereoscopic display is an exciting experience, partly because the 3D effect is seen without glasses. Nevertheless, it is an unnatural condition for the eyes, as the depth effect is created by the disparity between the left and right views on a flat screen instead of by a real object at the corresponding location. Watching 3D may therefore be more tiring than watching 2D. This question is investigated in this contribution by a subjective experiment: a search task is conducted and the behavior of the participants is recorded with an eye tracker. Several indicators, both for low-level perception and for the task performance itself, are evaluated, and two optometric tests are performed. A verification session with conventional 2D viewing is included. The results are discussed in detail, and it can be concluded that 3D viewing does not have a negative impact on the task performance used in the experiment.

  10. Transfer of learning between 2D and 3D sources during infancy: Informing theory and practice

    PubMed Central

    Barr, Rachel

    2010-01-01

    The ability to transfer learning across contexts is an adaptive skill that develops rapidly during early childhood. Learning from television is a specific instance of transfer of learning between a 2-dimensional (2D) representation and a 3-dimensional (3D) object. Understanding the conditions under which young children might accomplish this particular kind of transfer is important because, by 2 years of age, 90% of US children are viewing television on a daily basis. Recent research shows that children can imitate actions presented on television using the corresponding real-world objects, but this same research also shows that children learn less from television than they do from live demonstrations until they are at least 3 years old, a phenomenon termed the video deficit effect. At present, there is no coherent theory to account for the video deficit effect; how learning is disrupted by this change in context is poorly understood. The aims of the present review are (1) to review the conditions under which children transfer learning between 2D images and 3D objects during early childhood, and (2) to integrate developmental theories of memory processing into the transfer-of-learning-from-media literature using Hayne's (2004) developmental representational flexibility account. The review concludes that studies on the transfer of learning between 2D and 3D sources have important theoretical implications for general theories of cognitive development, in particular the development of a flexible representational system, as well as policy implications for early education regarding the potential use and limitations of media as effective teaching tools during early childhood. PMID:20563302

  11. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  12. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they only describe the statistics of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering; we focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  13. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3D refractive index maps

    NASA Astrophysics Data System (ADS)

    Kim, Kyoohyun; Park, Yongkeun

    2017-05-01

    Optical trapping can manipulate the three-dimensional (3D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieve the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3D refractive index distribution of samples. Engineering the 3D light field distribution of a trapping beam based on the measured 3D refractive index map of samples generates a light mould, which can manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can be directly applied in biophotonics and soft matter physics.

  14. A Demonstration of ‘Broken’ Visual Space

    PubMed Central

    Gilson, Stuart

    2012-01-01

    It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A>B>D yet also A

  15. High quality 3D shape reconstruction via digital refocusing and pupil apodization in multi-wavelength holographic interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Li

    Multi-wavelength holographic interferometry (MWHI) has good potential to evolve into a high-quality 3D shape reconstruction technique. Several challenges remain, including 1) the depth-of-field limitation, which leads to axial inaccuracy for out-of-focus objects, and 2) smearing from shiny smooth objects onto their dark dull neighbors, which generates spurious measurements within the dark area. This research is motivated by the goal of developing an advanced optical metrology system that provides accurate 3D profiles for objects whose axial dimension exceeds the depth of field, and for objects with dramatically different surface conditions. Digital refocusing in MWHI is proposed as a solution to the depth-of-field limitation. On the one hand, the traditional single-wavelength refocusing formula is revised to reduce sensitivity to wavelength error; investigation of a real example demonstrates promising accuracy and repeatability of the reconstructed 3D profiles. On the other hand, a phase-contrast-based focus detection criterion is developed especially for MWHI, which overcomes the problem of phase unwrapping. The combination of these two innovations yields a systematic strategy for acquiring high-quality 3D profiles: after the phase-contrast-based focus detection step, interferometric distance measurement by MWHI is used to conduct relative focus detection with high accuracy. This strategy results in +/-100 mm 3D profiles with micron-level axial accuracy, which is not available in traditional extended focus image (EFI) solutions. Pupil apodization is implemented to address the second challenge, smearing. The inspection of reflective rough surfaces is modeled mathematically, which explains the origin of the stray light and the necessity of replacing a hard-edged pupil with one of gradually attenuating transmission (apodization). 
Metrics to optimize pupil types and parameters have been chosen especially for MWHI. A Gaussian apodized pupil has been installed and tested. A reduction of smearing in measurement result has been experimentally demonstrated.

  16. Real-time time-division color electroholography using a single GPU and a USB module for synchronizing reference light.

    PubMed

    Araki, Hiromitsu; Takada, Naoki; Niwase, Hiroaki; Ikawa, Shohei; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2015-12-01

    We propose real-time time-division color electroholography using a single graphics processing unit (GPU) and a simple synchronization system of reference light. To facilitate real-time time-division color electroholography, we developed a light emitting diode (LED) controller with a universal serial bus (USB) module and the drive circuit for reference light. A one-chip RGB LED connected to a personal computer via an LED controller was used as the reference light. A single GPU calculates three computer-generated holograms (CGHs) suitable for red, green, and blue colors in each frame of a three-dimensional (3D) movie. After CGH calculation using a single GPU, the CPU can synchronize the CGH display with the color switching of the one-chip RGB LED via the LED controller. Consequently, we succeeded in real-time time-division color electroholography for a 3D object consisting of around 1000 points per color when an NVIDIA GeForce GTX TITAN was used as the GPU. Furthermore, we implemented the proposed method in various GPUs. The experimental results showed that the proposed method was effective for various GPUs.

  17. Microvascular anastomosis guidance and evaluation using real-time three-dimensional Fourier-domain Doppler optical coherence tomography

    PubMed Central

    Ibrahim, Zuhaib; Tong, Dedi; Zhu, Shan; Mao, Qi; Pang, John; Andrew Lee, Wei Ping; Brandacher, Gerald; Kang, Jin U.

    2013-01-01

    Abstract. Vascular and microvascular anastomoses are critical components of reconstructive microsurgery, vascular surgery, and transplant surgery. Intraoperative surgical guidance using a surgical imaging modality that provides an in-depth view and three-dimensional (3-D) imaging can potentially improve outcome following both conventional and innovative anastomosis techniques. Objective postoperative imaging of the anastomosed vessel can potentially improve the salvage rate when combined with other clinical assessment tools, such as capillary refill, temperature, blanching, and skin turgor. Compared to other contemporary postoperative monitoring modalities—computed tomography angiograms, magnetic resonance (MR) angiograms, and ultrasound Doppler—optical coherence tomography (OCT) is a noninvasive high-resolution (micron-level), high-speed, 3-D imaging modality that has been adopted widely in biomedical and clinical applications. For the first time, to the best of our knowledge, the feasibility of real-time 3-D phase-resolved Doppler OCT (PRDOCT) as an assisted intra- and postoperative imaging modality for microvascular anastomosis of rodent femoral vessels is demonstrated, which will provide new insights and a potential breakthrough to microvascular and supermicrovascular surgery. PMID:23856833

  18. Evaluation of viewing experiences induced by a curved three-dimensional display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-10-01

    Despite an increased need for three-dimensional (3-D) functionality in curved displays, comparisons pertinent to human factors between curved and flat panel 3-D displays have rarely been tested. This study compared stereoscopic 3-D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures. Twenty-four participants took part in the experiments and viewed 3-D content with two different displays (flat and curved 3-D display) within a counterbalanced and within-subject design. For the 30-min viewing condition, a paired t-test showed significantly reduced P300 amplitudes, which were caused by engagement rather than cognitive fatigue, in the curved 3-D viewing condition compared to the flat 3-D viewing condition at P3 and P4. No significant differences in P300 amplitudes were observed for 60-min viewing. Subjective ratings of realness and engagement were also significantly higher in the curved 3-D viewing condition than in the flat 3-D viewing condition for 30-min viewing. Our findings support that curved 3-D displays can be effective for enhancing engagement among viewers based on specific viewing times and environments.

  19. A new method for recognizing quadric surfaces from range data and its application to telerobotics and automation

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1992-01-01

    A feature set of two-dimensional curves is obtained by intersecting symmetric objects such as spheres, cones, cylinders, ellipsoids, paraboloids, and parallelepipeds with two planes. After determining the location and orientation of the objects in space, the objects are aligned so as to lie on a plane parallel to a suitable coordinate system and are then intersected with a horizontal and a vertical plane. Experiments were carried out with range images of a sphere and a cylinder. The 3-D discriminant approach was used to recognize quadric surfaces built from simulated data, and its application to real data was also studied.
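The classification of the resulting planar intersection curves can be sketched with the classic conic discriminant B^2 - 4AC. A minimal illustration only; the paper's 3-D discriminant approach is more general than this 2D test.

```python
def classify_conic(A, B, C):
    """Classify the curve A*x^2 + B*x*y + C*y^2 + (linear terms) = 0,
    such as the cross-section of a quadric cut by a plane, using the
    discriminant B^2 - 4*A*C."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return "ellipse"    # a circle when A == C and B == 0
    if disc == 0:
        return "parabola"
    return "hyperbola"

# A sphere cut by any plane yields a circle (an ellipse with A == C, B == 0):
print(classify_conic(1, 0, 1))   # ellipse
# A cone cut by a plane parallel to its axis yields a hyperbola:
print(classify_conic(1, 0, -1))  # hyperbola
```

Running both test intersections through such a classifier gives a small feature vector per object, which is the spirit of recognizing quadrics from their planar cross-sections.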

  20. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies that emphasize 3D non-uniform inspection: (a) adaptive sampling, (b) local adjustment sampling, and (c) finite element centroid sampling. The adaptive sampling strategy is based on a recursive surface subdivision process; two approaches are described, one using triangle patches and the other using rectangle patches. Several real-world objects were tested with these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects, and that adaptive sampling using triangle patches generally performs better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point, approximating the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and point sets first preprocessed by the adaptive sampling algorithm on a real-world object were then refined with the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangle patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of that method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method; its performance was compared to that of adaptive sampling using triangle patches, and adaptive sampling with triangle patches was once again shown to be better on different classes of objects.
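The recursive subdivision at the heart of the adaptive triangle-patch strategy can be sketched as follows. A toy Python version under simplifying assumptions: patches are split on a plain area threshold and sampled at their centroids, whereas the paper's subdivision criterion is what actually concentrates samples near edges and corners.

```python
def adaptive_sample(tri, min_area, points=None):
    """Recursively split a 2D triangle at its edge midpoints until each
    patch is no larger than `min_area`, collecting patch centroids as
    sample points (a toy stand-in for adaptive triangle-patch sampling)."""
    if points is None:
        points = []
    (ax, ay), (bx, by), (cx, cy) = tri
    area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
    if area <= min_area:
        points.append(((ax + bx + cx) / 3.0, (ay + by + cy) / 3.0))
        return points
    # Midpoint subdivision into four congruent sub-triangles.
    mab = ((ax + bx) / 2, (ay + by) / 2)
    mbc = ((bx + cx) / 2, (by + cy) / 2)
    mca = ((cx + ax) / 2, (cy + ay) / 2)
    for sub in (((ax, ay), mab, mca), (mab, (bx, by), mbc),
                (mca, mbc, (cx, cy)), (mab, mbc, mca)):
        adaptive_sample(sub, min_area, points)
    return points

pts = adaptive_sample(((0, 0), (4, 0), (0, 4)), min_area=1.0)
print(len(pts))  # 16 centroids after two levels of subdivision
```

Replacing the area test with a surface-curvature or feature test is what would drive samples toward edges and corners, as the preliminary results above describe.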

  1. Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.

    PubMed

    Thali, Michael J; Braun, Marcel; Dirnhofer, Richard

    2003-11-26

    The photographic process reduces a three-dimensional (3D) wound to two dimensions. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D digitizing surface scanners can serve as a powerful tool for analyzing wounds and injury-causing instruments in trauma cases. The documentation of a 3D skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated in two illustrative cases. Using this optical 3D digitizing method, the wounds (the virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are documented graphically in 3D at real-life size and shape and can be rotated on screen in a CAD program. In addition, the virtual 3D models of the bone injuries and of the tool can be compared against one another in virtual space within a 3D CAD program to check for matching areas. Further steps in forensic medicine will be a full 3D surface documentation of the human body and of all forensically relevant injuries using optical 3D scanners.

  2. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system composes a realistic picture of what is actually happening in the wildfire area, with dynamic 3D objects and real-time locations of human and material resources, providing a new perspective for analyzing wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire's evolution on the landscape, and a Level Of Detail (LOD) strategy contributes to the performance of the visualization system.

  3. Speckle-based three-dimensional velocity measurement using spatial filtering velocimetry.

    PubMed

    Iversen, Theis F Q; Jakobsen, Michael L; Hanson, Steen G

    2011-04-10

    We present an optical method for measuring the real-time three-dimensional (3D) translational velocity of a diffusely scattering rigid object observed through an imaging system. The method is based on a combination of the motion of random speckle patterns and regular fringe patterns. The speckle pattern is formed in the observation plane of the imaging system due to reflection from an area of the object illuminated by a coherent light source. The speckle pattern translates in response to in-plane translation of the object, and the presence of an angular offset reference wave coinciding with the speckle pattern in the observation plane gives rise to interference, resulting in a fringe pattern that translates in response to the out-of-plane translation of the object. Numerical calculations are performed to evaluate the dynamic properties of the intensity distribution and the response of realistic spatial filters designed to measure the three components of the object's translational velocity. Furthermore, experimental data are presented that demonstrate full 3D velocity measurement. © 2011 Optical Society of America
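The in-plane part of such a measurement amounts to estimating how far the speckle pattern has translated between observations. A brute-force cross-correlation peak search illustrates the idea; this is a digital toy, not the paper's method, which performs the filtering optically with spatial filters.

```python
def shift_by_correlation(img0, img1, max_shift=3):
    """Estimate the integer (dy, dx) in-plane shift between two intensity
    patterns by brute-force cross-correlation over a small search window."""
    h, w = len(img0), len(img0[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = sum(
                img0[y][x] * img1[y + dy][x + dx]
                for y in range(h) for x in range(w)
                if 0 <= y + dy < h and 0 <= x + dx < w)
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# A sparse synthetic "speckle" pattern and a copy shifted down 1, right 2:
ones = {(1, 1), (2, 5), (4, 2), (6, 4), (3, 3)}
img0 = [[1 if (y, x) in ones else 0 for x in range(8)] for y in range(8)]
img1 = [[1 if (y - 1, x - 2) in ones else 0 for x in range(8)]
        for y in range(8)]
print(shift_by_correlation(img0, img1))  # (1, 2)
```

Tracking this peak frame to frame gives the two in-plane velocity components; the out-of-plane component, as the abstract explains, comes instead from the interference fringe pattern produced by the reference wave.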

  4. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, their capture, and their transfer for creating 3D scenes have not yet been defined, and the accuracy of 3D video scenes used for measuring purposes is hardly ever addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by their capability to expand the application of data analysis to environmental monitoring, urban planning, and managerial decision problems. A technology for constructing 3D video scenes that meets specified metric requirements is offered, together with recommended techniques and methodological background for constructing 3D video scenes based on DTMs created from satellite and aerial survey data. The results of an accuracy estimation of the 3D video scenes are presented.

  5. Simulated convective systems using a cloud resolving model: Impact of large-scale temperature and moisture forcing using observations and GEOS-3 reanalysis

    NASA Technical Reports Server (NTRS)

    Shie, C.-L.; Tao, W.-K.; Hou, A.; Lin, X.

    2006-01-01

    The GCE (Goddard Cumulus Ensemble) model, developed and improved at NASA Goddard Space Flight Center over the past two decades, is considered one of the finest, state-of-the-art CRMs (Cloud Resolving Models) in the research community. As the chosen CRM for a NASA Interdisciplinary Science (IDS) Project, GCE has recently been upgraded to an MPI (Message Passing Interface) version, achieving great improvements in computational efficiency, scalability, and portability. By using the large-scale temperature and moisture advective forcing, together with the temperature, water vapor, and wind fields obtained from TRMM (Tropical Rainfall Measuring Mission) field experiments such as SCSMEX (South China Sea Monsoon Experiment) and KWAJEX (Kwajalein Experiment), our recent 2-D and 3-D GCE simulations were able to capture detailed convective systems typical of the simulated regions. The GEOS-3 [Goddard EOS (Earth Observing System) Version-3] reanalysis data have also been successfully implemented in the proposed long-term GCE simulations (aimed at producing a massive library of simulated cloud data) to compensate for the scarcity of real field-experiment data in both time and space. Preliminary 2-D and 3-D pilot results using GEOS-3 data generally show good qualitative agreement (with some quantitative differences) with the corresponding numerical results using the SCSMEX observations. The first objective of this paper is to assess the GEOS-3 data quality by comparing model results from several pairs of simulations using the real observations and the GEOS-3 reanalysis data. The differing large-scale advective forcing obtained from these two sources (sounding observations and GEOS-3 reanalysis) is considered a critical factor in producing the various model results. 
The second objective is therefore to investigate and present the impact of this large-scale forcing on various modeled quantities (such as hydrometeors and rainfall). A third objective is to validate the overall 3-D GCE model performance by comparing the numerical results with sounding observations as well as available satellite retrievals.

  6. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application can run in stealth mode or as a player alongside battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. The system pairs RenderWare with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. The result is a low-cost PC DIS simulator that can take part in a real-time collaborative simulation with other platforms.

  7. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for user-friendly applications such as smart video retrieval and intelligent video summarization. This paper aims at a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sport in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that feature detection performance is above 92% and event detection about 90%.

  8. Real-time 3-D X-ray and gamma-ray viewer

    NASA Technical Reports Server (NTRS)

    Yin, L. I. (Inventor)

    1983-01-01

    A multi-pinhole aperture lead screen forms an equal plurality of invisible mini-images, having dissimilar perspectives, of an X-ray- and gamma-ray-emitting object (ABC) onto a nearby phosphor layer. This layer provides visible-light mini-images directly into a visible-light image intensifier. A viewing screen, having an equal number of dissimilar-perspective apertures distributed across its face in a geometric pattern identical to the lead screen, provides a viewer with a real, pseudoscopic image (A'B'C') of the object with full horizontal and vertical parallax. Alternatively, a third screen identical to the viewing screen, spaced apart from a second visible-light image intensifier, may be positioned between the first image intensifier and the viewing screen, thereby providing the viewer with a virtual, orthoscopic image (A"B"C") of the object (ABC) with full horizontal and vertical parallax.

  9. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become increasingly popular in many fields. Although they provide a better viewing experience, they cause additional problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue, but little research combines subjective and objective indicators in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without interrupting the viewing. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with previous research. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous process than in a discrete one. The BF increased with time during the continuous viewing process. In addition, the visual fatigue also induced significant changes in VRT, CFF and PMA.

  10. An objective algorithm for reconstructing the three-dimensional ocean temperature field based on Argo profiles and SST data

    NASA Astrophysics Data System (ADS)

    Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang

    2017-12-01

    While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, but with minimal coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on the Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, with the vertical temperature structure represented continuously. By calculating the derivatives of the fitted functions, the vertical temperature gradient of the Argo profiles can be computed at an arbitrary depth. A gridded 3D temperature-gradient field is then obtained by applying inverse distance weighting interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field below the surface is reconstructed using the gridded temperature gradient. To confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field derived by the proposed algorithm achieves satisfactory accuracy, with correlation coefficients of 0.99, and offers a higher spatial resolution (0.25° × 0.25°), resulting in the capture of smaller-scale characteristics. Both the accuracy and the superiority of the algorithm are thus validated.
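    The horizontal interpolation step above can be illustrated with a minimal inverse-distance-weighting sketch; the function name, power parameter, and sample layout are illustrative assumptions, not the authors' code:

```python
def idw(x, y, samples, power=2.0):
    """Inverse distance weighting: estimate a value at (x, y) from
    scattered (xi, yi, vi) samples, as in the horizontal interpolation
    of the gridded temperature-gradient field."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return vi  # query coincides with a sample point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * vi
        den += w
    return num / den
```

    For a gridded field, such a function would be evaluated at every grid node using the gradient values of the surrounding Argo profiles.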

  11. Thermal Texture Generation and 3d Model Reconstruction Using SFM and Gan

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Mizginov, V. A.

    2018-05-01

    Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations, such as transforming day to night or generating imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures showed that they are similar to the ground-truth model in both thermal emissivity and geometrical shape.

  12. A real-time robot arm collision avoidance system

    NASA Technical Reports Server (NTRS)

    Shaffer, Clifford A.; Herb, Gregory M.

    1992-01-01

    A data structure and update algorithm are presented for a prototype real-time collision avoidance safety system simulating a multirobot workspace. The data structure is a variant of the octree, which serves as a spatial index. An octree recursively decomposes 3D space into eight equal cubic octants until each octant meets some decomposition criteria. The N-objects octree, which indexes a collection of 3D primitive solids, is used. These primitives make up the two seven-degrees-of-freedom robot arms and the workspace modeled by the system. As the robot arms move, the octree is updated to reflect their changed positions. During most update cycles, any given primitive does not change which octree nodes it occupies; thus, modification of the octree is rarely required. Cycle time for interpreting current arm joint angles, updating the octree to reflect new positions, and detecting/reporting imminent collisions averages 30 ms on an Intel 80386 processor running at 20 MHz.
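    A minimal sketch of the octree idea behind such a system is given below; the node layout, thresholds, and names are hypothetical, not the paper's implementation. Each node subdivides into eight octants once it indexes more than a few primitives, so two arm primitives become candidate collision pairs only when they share a leaf:

```python
# Illustrative N-objects-octree sketch: a node splits into 8 octants when it
# holds more than MAX_OBJS axis-aligned boxes, up to MAX_DEPTH levels.
MAX_OBJS, MAX_DEPTH = 2, 4

class Node:
    def __init__(self, lo, size, depth=0):
        self.lo, self.size, self.depth = lo, size, depth
        self.objs, self.children = [], None  # leaf until split

    def overlaps(self, box):
        """Does the axis-aligned box ((lx,ly,lz),(hx,hy,hz)) intersect this octant?"""
        (bx, by, bz), s = self.lo, self.size
        (lx, ly, lz), (hx, hy, hz) = box
        return (lx < bx + s and hx > bx and ly < by + s and hy > by
                and lz < bz + s and hz > bz)

    def insert(self, obj, box):
        if not self.overlaps(box):
            return
        if self.children is None:
            self.objs.append((obj, box))
            if len(self.objs) > MAX_OBJS and self.depth < MAX_DEPTH:
                self._split()
        else:  # internal node: push the object down into overlapping octants
            for c in self.children:
                c.insert(obj, box)

    def _split(self):
        h = self.size / 2.0
        x, y, z = self.lo
        self.children = [Node((x + dx * h, y + dy * h, z + dz * h),
                              h, self.depth + 1)
                         for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
        for obj, box in self.objs:  # redistribute held objects to children
            for c in self.children:
                c.insert(obj, box)
        self.objs = []
```

    When an arm moves, only the primitives whose octant membership actually changes need reinsertion, which is why most update cycles leave the tree untouched.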

  13. Robust position estimation of a mobile vehicle

    NASA Astrophysics Data System (ADS)

    Conan, Vania; Boulanger, Pierre; Elgazzar, Shadia

    1994-11-01

    The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask parts of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is essentially independent of odometry information and viewpoint, robust to noisy data and spurious points, and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial: it runs in O(m^4 n^4), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is heavily occluded.

  14. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to segmenting different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.

  15. Orthoscopic real-image display of digital holograms.

    PubMed

    Makowski, P L; Kozacki, T; Zaperty, W

    2017-10-01

    We present a practical solution for the long-standing problem of depth inversion in real-image holographic display of digital holograms. It relies on a field lens inserted in front of the spatial light modulator device addressed by a properly processed hologram. The processing algorithm accounts for pixel size and wavelength mismatch between capture and display devices in a way that prevents image deformation. Complete images of large dimensions are observable from one position with a naked eye. We demonstrate the method experimentally on a 10-cm-long 3D object using a single full-HD spatial light modulator, but it can supplement most holographic displays designed to form a real image, including circular wide angle configurations.

  16. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  17. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed-sensing-based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested in a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total-variation-regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
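    The MSSIM comparison mentioned above averages the structural similarity (SSIM) index over local windows. A minimal global-SSIM sketch is shown below; the flattened-patch interface is an assumption for illustration, and c1, c2 are the usual (0.01*255)^2 and (0.03*255)^2 defaults for 8-bit images:

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    """Structural similarity between two equal-length flattened patches.
    MSSIM is the mean of this index taken over local windows."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                     # means
    vx = sum((v - mx) ** 2 for v in x) / n              # variances
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

    Identical patches score exactly 1.0; structural differences between two reconstructions (e.g., ART versus ART+TV against a phantom) pull the index below 1.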

  18. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed-sensing-based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed-sensing-based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested in a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total-variation-regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values. PMID:24371468

  19. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of a target object with a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the Gray Level Co-occurrence Matrix, along with its statistical features, for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features computed from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
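    A minimal sketch of a gray level co-occurrence matrix and its contrast feature, one of the statistical features usable as a focus measure, is shown below; the pure-Python form and parameter names are illustrative only:

```python
def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset: count how
    often gray level i occurs next to gray level j, then normalize to a
    joint probability. `img` is a 2-D list of ints in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                m[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    """GLCM contrast feature: large for sharply focused, textured
    patches, which is what makes it usable as a focus measure."""
    return sum(p * (i - j) ** 2
               for i, row in enumerate(m) for j, p in enumerate(row))
```

    In a shape-from-focus pipeline, a feature like `contrast` would be computed per window per frame, and the frame maximizing it gives that pixel's depth.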

  20. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  1. Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.

    PubMed

    Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee

    2017-07-01

    We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. In particular, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where the front and back sides of some curvy object parts are revealed simultaneously in the image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide a real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with humans, animals, flowers, etc.

  2. Can 3D Gamified Simulations Be Valid Vocational Training Tools for Persons with Intellectual Disability? An Experiment Based on a Real-life Situation.

    PubMed

    von Barnekow, Ariel; Bonet-Codina, Núria; Tost, Dani

    2017-03-23

    The aim was to investigate whether 3D gamified simulations can be valid vocational training tools for persons with intellectual disability. A 3D gamified simulation composed of a set of training tasks for cleaning in hostelry was developed in collaboration with professionals of a real hostel and pedagogues of a special-needs school. The learning objectives focus on the acquisition of vocabulary skills, work procedures, social abilities and risk prevention. Several accessibility features were developed to make the tasks easy to perform from a technological point of view. A pilot experiment was conducted to test the pedagogical efficacy of this tool on intellectually disabled workers and students. User scores in the gamified simulation follow a curve of increasing progression. When confronted with reality, the users recognized the scenario and tried to reproduce what they had learned in the simulation. Finally, they were interested in the tool, showed a strong feeling of immersion and engagement, and reported having fun. On the basis of this experiment we believe that 3D gamified simulations can be efficient tools for training the social and professional skills of persons with intellectual disabilities, thus helping to foster their social inclusion through work.

  3. An augmented reality tool for learning spatial anatomy on mobile devices.

    PubMed

    Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti

    2017-09-01

    Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in this study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of the prototype focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT)-derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display them on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.

  4. Grayscale imbalance correction in real-time phase measuring profilometry

    NASA Astrophysics Data System (ADS)

    Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng

    2016-10-01

    Grayscale imbalance correction in real-time phase measuring profilometry (RPMP) is proposed. In RPMP, sufficient information to reconstruct the 3D shape of the measured object is obtained in 1/24 of a second. Only one color fringe pattern, whose R, G and B channels are coded as three sinusoidal phase-shifting gratings with an equivalent phase shift of 2π/3, is sent to a flash memory on a specialized digital light projector (SDLP). The SDLP then projects the fringe patterns in the R, G and B channels sequentially onto the measured object in 1/72 of a second while a monochrome CCD camera captures the corresponding deformed patterns synchronously with the SDLP. Because the deformed patterns from the three color channels are captured at different times, color crosstalk is avoided completely. However, owing to the monochrome CCD camera's different spectral sensitivity to the R, G and B channels, there is grayscale imbalance among the deformed patterns captured in the three channels, which may increase measuring errors or even cause the 3D shape reconstruction to fail. A new grayscale imbalance correction method based on the least squares method is therefore developed. The experimental results verify the feasibility of the proposed method.
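    For reference, with three gratings shifted by 2π/3 the wrapped phase follows from the standard three-step phase-shifting formula. The channel-balancing helper below is a crude hypothetical stand-in for the paper's least-squares correction, not its actual method:

```python
import math

def phase_three_step(i1, i2, i3):
    """Wrapped phase from three intensities with 2*pi/3 phase shifts,
    i.e. I_n = A + B*cos(phi + (n - 2)*2*pi/3). Standard identity:
    sqrt(3)*(I1 - I3) = 3*B*sin(phi), 2*I2 - I1 - I3 = 3*B*cos(phi)."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def balance_channels(r, g, b):
    """Hypothetical balance step: rescale each channel so its mean
    matches the cross-channel mean, compensating for the camera's
    different spectral sensitivities (illustrative stand-in for the
    paper's least-squares correction)."""
    chans = [r, g, b]
    means = [sum(c) / len(c) for c in chans]
    target = sum(means) / 3.0
    return [[v * target / m for v in c] for c, m in zip(chans, means)]
```

    Without the balance step, unequal channel gains distort the three intensities differently and bias the recovered phase, which is the failure mode the paper addresses.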

  5. Real-Time Amplitude and Phase Imaging of Optically Opaque Objects by Combining Full-Field Off-Axis Terahertz Digital Holography with Angular Spectrum Reconstruction

    NASA Astrophysics Data System (ADS)

    Yamagiwa, Masatomo; Ogawa, Takayuki; Minamikawa, Takeo; Abdelsalam, Dahi Ghareab; Okabe, Kyosuke; Tsurumachi, Noriaki; Mizutani, Yasuhiro; Iwata, Testuo; Yamamoto, Hirotsugu; Yasui, Takeshi

    2018-06-01

    Terahertz digital holography (THz-DH) has the potential to be used for non-destructive inspection of visibly opaque soft materials due to its good immunity to optical scattering and absorption. Although previous research on full-field off-axis THz-DH has usually been performed using Fresnel diffraction reconstruction, its minimum reconstruction distance occasionally prevents a sample from being placed near a THz imager to increase the signal-to-noise ratio in the hologram. In this article, we apply the angular spectrum method (ASM) for wavefront reconstruction in full-field off-axis THz-DH because ASM is more accurate at short reconstruction distances. We demonstrate real-time phase imaging of a visibly opaque plastic sample with a phase resolution power of λ/49 at a frame rate of 3.5 Hz in addition to real-time amplitude imaging. We also perform digital focusing of the amplitude image for the same object with a depth selectivity of 447 μm. Furthermore, 3D imaging of visibly opaque silicon objects was achieved with a depth precision of 1.7 μm. The demonstrated results indicate the high potential of the proposed method for in-line or in-process non-destructive inspection of soft materials.

  6. Real-Time Amplitude and Phase Imaging of Optically Opaque Objects by Combining Full-Field Off-Axis Terahertz Digital Holography with Angular Spectrum Reconstruction

    NASA Astrophysics Data System (ADS)

    Yamagiwa, Masatomo; Ogawa, Takayuki; Minamikawa, Takeo; Abdelsalam, Dahi Ghareab; Okabe, Kyosuke; Tsurumachi, Noriaki; Mizutani, Yasuhiro; Iwata, Testuo; Yamamoto, Hirotsugu; Yasui, Takeshi

    2018-04-01

    Terahertz digital holography (THz-DH) has the potential to be used for non-destructive inspection of visibly opaque soft materials due to its good immunity to optical scattering and absorption. Although previous research on full-field off-axis THz-DH has usually been performed using Fresnel diffraction reconstruction, its minimum reconstruction distance occasionally prevents a sample from being placed near a THz imager to increase the signal-to-noise ratio in the hologram. In this article, we apply the angular spectrum method (ASM) for wavefront reconstruction in full-field off-axis THz-DH because ASM is more accurate at short reconstruction distances. We demonstrate real-time phase imaging of a visibly opaque plastic sample with a phase resolution power of λ/49 at a frame rate of 3.5 Hz in addition to real-time amplitude imaging. We also perform digital focusing of the amplitude image for the same object with a depth selectivity of 447 μm. Furthermore, 3D imaging of visibly opaque silicon objects was achieved with a depth precision of 1.7 μm. The demonstrated results indicate the high potential of the proposed method for in-line or in-process non-destructive inspection of soft materials.

  7. An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems.

    PubMed

    Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi

    2008-10-01

    Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥ 3) in complex occlusion for real-world surveillance scenarios.
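The dominant color histogram idea, keeping only the most frequent quantized colors, can be sketched as below. This is a simplified illustration with hypothetical parameter names; the paper's distance-measure-based selection is reduced here to a simple coverage cutoff:

```python
import numpy as np

def dominant_color_histogram(pixels, bins=8, coverage=0.95):
    """Quantize RGB pixels (N x 3, values 0-255) into bins**3 cells, then
    keep only the most frequent cells until `coverage` of the pixels is
    accounted for.  Returns {cell_index: relative frequency}."""
    q = (pixels // (256 // bins)).astype(int)            # per-channel quantization
    cells = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    counts = np.bincount(cells, minlength=bins ** 3) / len(cells)
    kept, total = {}, 0.0
    for c in np.argsort(counts)[::-1]:                   # most frequent first
        if total >= coverage or counts[c] == 0:
            break
        kept[int(c)] = float(counts[c])
        total += counts[c]
    return kept

def dch_similarity(h1, h2):
    """Histogram intersection restricted to the dominant cells."""
    return sum(min(h1.get(c, 0.0), h2.get(c, 0.0)) for c in set(h1) | set(h2))
```

With only a few dominant cells per object, matching candidate regions against object models stays cheap, which is the efficiency argument made in the abstract.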

  8. Human perception considerations for 3D content creation

    NASA Astrophysics Data System (ADS)

    Green, G. Almont

    2011-03-01

    Observation and interviews with people viewing autostereoscopic 3D imagery provide evidence that many human perception considerations are required for 3D content creation. In the study undertaken, certain test autostereoscopic imagery elicited a highly emotional response and engagement, while other test imagery was given only a passing glance. That an image can be viewed with a certain level of stereopsis does not make it compelling. By taking into consideration the manner in which humans perceive depth and the space between objects, 3D content can achieve a level of familiarity and realness that is not possible with single-perspective imagery. When human perception issues are ignored, 3D imagery can be undesirable to viewers, and a negative bias against 3D imagery can develop. The preparation of 3D content is more important than the display technology. Where human perception, as it is used to interpret reality, is not mimicked in the creation of 3D content, the general public typically expresses a negative bias against that imagery (where choices are provided). For some viewers, 3D content that could not exist naturally induces physical discomfort.

  9. Integrating motion, illumination, and structure in video sequences with applications in illumination-invariant tracking.

    PubMed

    Xu, Yilei; Roy-Chowdhury, Amit K

    2007-05-01

    In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.

  10. A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System

    PubMed Central

    Yuan, Jianying; Wang, Qiong; Li, Bailin

    2014-01-01

    3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is the key to scanning precision. Currently, a finely machined 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision, which is difficult to operate and costly. In this paper, a novel calibration method is proposed that uses a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. We obtain initial intrinsic parameters from images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and has the same precision level as results obtained using a delicate artificial reference object, while the hardware cost is very low compared with current calibration methods used in 3D structured light scanning systems. PMID:25202736
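The quantity that the final bundle-adjustment stage minimizes is the reprojection error. A minimal pinhole-model sketch of that residual follows; names and shapes are illustrative, and no lens-distortion model is included:

```python
import numpy as np

def reprojection_residuals(K, R, t, points3d, observed2d):
    """Residual vector minimized by bundle adjustment: project 3D points
    through a pinhole camera (intrinsics K, pose R, t) and subtract the
    observed 2D image coordinates."""
    cam = R @ points3d.T + t.reshape(3, 1)           # world -> camera frame
    proj = K @ cam                                   # camera -> pixel (homogeneous)
    uv = (proj[:2] / proj[2]).T                      # perspective division
    return (uv - observed2d).ravel()
```

A nonlinear least-squares solver drives this residual toward zero over all cameras and points simultaneously.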

  11. Passive lighting responsive three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Hu, Juanmei

    2017-11-01

    A three-dimensional (3D) integral imaging (II) technique with a real-time passive lighting responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom in designing the II system and enables the virtual 3D image to interact with the real illumination environment in real time.

  12. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique consists of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, comprises all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.

  13. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force, are not able to obtain data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. This approach is also used to measure changes in the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, our visual pipeline requires no deformation models of objects and materials, and it works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern placed on the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
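The supervision idea, raising an event when the observed depth of the grasped surface changes too much between frames, can be sketched as below. The threshold and pixel-count parameters are illustrative, not the authors' values:

```python
import numpy as np

def deformation_event(depth_prev, depth_curr, threshold_mm=5.0, min_pixels=50):
    """Compare consecutive depth maps of the grasped object and raise an
    event when enough surface points have moved by more than a threshold.
    Returns (event_fired, number_of_deformed_pixels)."""
    valid = (depth_prev > 0) & (depth_curr > 0)      # ignore missing depth
    change = np.abs(depth_curr - depth_prev)
    deformed = valid & (change > threshold_mm)
    return bool(deformed.sum() >= min_pixels), int(deformed.sum())
```

In a real pipeline such events would be throttled and sent to the robot controller as the pulses the abstract describes.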

  14. [Establishment of a 3D finite element model of human skull using MSCT images and mimics software].

    PubMed

    Huang, Ping; Li, Zheng-dong; Shao, Yu; Zou, Dong-hua; Liu, Ning-guo; Li, Li; Chen, Yuan-yuan; Wan, Lei; Chen, Yi-jiu

    2011-02-01

    To establish a human 3D finite element skull model, and to explore its value in biomechanics analysis. The cadaveric head was scanned, and a 3D skull model was then created using Mimics software based on 2D CT axial images. The 3D skull model was optimized by a preprocessor along with creation of the surface and volume meshes. The stress changes after the head was struck by an object or hit the ground directly were analyzed using ANSYS software. The original 3D skull model showed a large number of triangles with poor quality but high similarity to the real head, while the optimized model showed high-quality surface and volume meshes with comparatively few triangles. The model could show local and global stress changes effectively. The human 3D skull model can be established using MSCT and Mimics software and provides a good finite element model for biomechanics analysis. This model may also provide a basis for the study of head stress changes following different forces.

  15. Identification of pre-impact conditions of a cyclist involved in a vehicle-bicycle accident using an optimized MADYMO reconstruction combined with motion capture.

    PubMed

    Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu

    2018-05-01

    The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective functions of the walking parameters were significantly lower than those of the cycling parameters; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched, and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
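The nondominated sorting at the heart of NSGA-II reduces to a Pareto-dominance test over the objective values of candidate parameter sets. A minimal sketch for extracting the first front under minimization (the full algorithm's crowding distance and later fronts are omitted):

```python
def nondominated_front(objectives):
    """Indices of the first Pareto front under minimization.
    `objectives` is a list of equal-length tuples of objective values."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]
```

Comparing the fronts reached by the walking and cycling parameter sets is, in spirit, how the more likely pre-impact condition is selected.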

  16. Analysis of tomographic mineralogical data using YaDiV—Overview and practical case study

    NASA Astrophysics Data System (ADS)

    Friese, Karl-Ingo; Cichy, Sarah B.; Wolter, Franz-Erich; Botcharnikov, Roman E.

    2013-07-01

    We introduce the 3D-segmentation and -visualization software YaDiV to the mineralogical application of rock texture analysis. YaDiV was originally designed to process medical DICOM datasets, but due to software advancements and additional plugins, this open-source software can now easily be used for fast quantitative morphological characterization of geological objects from tomographic datasets. In this paper, we give a summary of YaDiV's features and demonstrate the advantages of 3D stereographic visualization and the accuracy of 3D segmentation for the analysis of geological samples. For this purpose, we present a virtual and a real use case (here: experimentally crystallized and vesiculated magmatic rocks, corresponding to the composition of the 1991-1995 Unzen eruption, Japan). In particular, the spatial representation of structures in YaDiV allows an immediate, intuitive understanding of 3D structures that may not become clear from 2D images alone. We compare our results of object number density calculations with the established classical stereological 3D-correction methods for 2D images and show that considerably higher quality and accuracy can be achieved. The methods described in this paper do not depend on the nature of the object. Since YaDiV is open-source and users with programming skills can create new plugins themselves, this platform may become applicable to a variety of geological scenarios, from the analysis of textures in tiny rock samples to the interpretation of global geophysical data, as long as the data are provided in tomographic form.

  17. Ocean Simulation Model for Internal Waves

    DTIC Science & Technology

    1990-08-01

    [Extraction residue from the report's front matter. Recoverable content: file-format tables for MODEL1.DAT (real U(i), V(i), W(i), Z(i), i = 1..NZ) and MODEL1.AUX (NT, NX, NZ, DT, DX, DZ, T0, LAT, LON, AZ); publisher line: Oceanographic and Atmospheric Research Laboratory, Stennis Space Center, Mississippi 39529-5004; contents entries: Foreword; I. (A. Background, B. Project Objectives, C. Purpose of This Manual); II. Background and Derivation of Algorithms (A. Stochastic Representation of ...).]

  18. Novel 3-D laparoscopic magnetic ultrasound image guidance for lesion targeting

    PubMed Central

    Sindram, David; McKillop, Iain H; Martinie, John B; Iannitti, David A

    2010-01-01

    Objectives: Accurate laparoscopic liver lesion targeting for biopsy or ablation depends on the ability to merge laparoscopic and ultrasound images with proprioceptive instrument positioning, a skill that can be acquired only through extensive experience. The aim of this study was to determine whether using magnetic positional tracking to provide three-dimensional, real-time guidance improves accuracy during laparoscopic needle placement. Methods: Magnetic sensors were embedded into a needle and laparoscopic ultrasound transducer. These sensors interrupted the magnetic fields produced by an electromagnetic field generator, allowing for real-time, 3-D guidance on a stereoscopic monitor. Targets measuring 5 mm were embedded 3–5 cm deep in agar and placed inside a laparoscopic trainer box. Two novices (a college student and an intern) and two experts (hepatopancreatobiliary surgeons) targeted the lesions out of the ultrasound plane using either traditional or 3-D guidance. Results: Each subject targeted 22 lesions, 11 with traditional and 11 with the novel guidance (n = 88). Hit rates of 32% (14/44) and 100% (44/44) were observed with the traditional approach and the 3-D magnetic guidance approach, respectively. The novices were essentially unable to hit the targets using the traditional approach, but did not miss using the novel system. The hit rate of experts improved from 59% (13/22) to 100% (22/22) (P < 0.0001). Conclusions: The novel magnetic 3-D laparoscopic ultrasound guidance results in perfect targeting of 5-mm lesions, even by surgical novices. PMID:21083797

  19. An efficient 3D R-tree spatial index method for virtual geographic environments

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Gong, Jun; Zhang, Yeting

    A three-dimensional (3D) spatial index is required for real-time applications of integrated organization and management in virtual geographic environments of above-ground, underground, indoor and outdoor objects. As one of the most promising methods, the R-tree spatial index has received increasing attention in 3D geospatial database management. Since existing R-tree methods are usually limited by low efficiency, due to the critical overlap of sibling nodes and uneven node sizes, this paper introduces the k-means clustering method and employs the 3D overlap volume, 3D coverage volume and the minimum bounding box shape value of nodes as integrative grouping criteria. A new spatial cluster grouping algorithm and R-tree insertion algorithm are then proposed. Experimental analysis of comparative spatial indexing performance shows that the new method drastically minimizes the overlap of R-tree sibling nodes and maintains a balance in the volumes of the nodes.
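Two of the grouping criteria named above, 3D overlap volume and 3D coverage volume, are simple box computations. A sketch follows, with axis-aligned boxes as (xmin, ymin, zmin, xmax, ymax, zmax) tuples; the shape-value criterion and the k-means grouping step are omitted, and the subtree-choice rule shown is a common R-tree heuristic rather than the paper's exact algorithm:

```python
def overlap_volume(a, b):
    """Overlap volume of two 3D axis-aligned bounding boxes."""
    dims = [max(0.0, min(a[i + 3], b[i + 3]) - max(a[i], b[i])) for i in range(3)]
    return dims[0] * dims[1] * dims[2]

def coverage_volume(boxes):
    """Sum of node volumes: the 3D coverage criterion."""
    return sum((b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2]) for b in boxes)

def choose_subtree(children, new_box):
    """Pick the child whose total overlap with its siblings grows least
    after being enlarged to include new_box."""
    def enlarged(b):
        return tuple(min(b[i], new_box[i]) for i in range(3)) + \
               tuple(max(b[i + 3], new_box[i + 3]) for i in range(3))
    def overlap_increase(k):
        e = enlarged(children[k])
        before = sum(overlap_volume(children[k], c)
                     for j, c in enumerate(children) if j != k)
        after = sum(overlap_volume(e, c)
                    for j, c in enumerate(children) if j != k)
        return after - before
    return min(range(len(children)), key=overlap_increase)
```

Minimizing this overlap increase on insertion is exactly what keeps sibling nodes from overlapping critically.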

  20. Streamlined, Inexpensive 3D Printing of the Brain and Skull

    PubMed Central

    Cash, Sydney S.

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional (3D) data that are typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient-specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good-quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine-instruction gcode files for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3–4 in consumable plastic filament as described, and the total process takes 14–17 hours, almost all of which is unsupervised (preprocessing = 4–6 hr; printing = 9–11 hr; post-processing = <30 min). Printing a matching portion of a skull costs $1–5 in consumable plastic filament and takes less than 14 hr in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients, who confirmed that rapid-prototype patient-specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459
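The STL stage of the pipeline is a simple binary format. A stdlib-only writer sketch follows; the real pipeline converts DICOM to STL with dedicated tools, and this only illustrates the file layout:

```python
import struct

def write_binary_stl(path, triangles):
    """Minimal binary STL writer.  `triangles` is a list of facets, each a
    sequence of three (x, y, z) vertex tuples.  The facet normal is left
    at (0, 0, 0), which most slicers recompute themselves."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                          # 80-byte header
        f.write(struct.pack("<I", len(triangles)))   # facet count
        for tri in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))   # normal placeholder
            for v in tri:
                f.write(struct.pack("<3f", *v))          # three vertices
            f.write(struct.pack("<H", 0))            # attribute byte count
```

Each facet record is exactly 50 bytes, so file size is 84 + 50 x facet count, which is a quick integrity check after export.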

  1. Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-03-01

    High-definition CGHs can reconstruct high-quality 3D images that are comparable to those in conventional optical holography. However, it has been difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. In this paper, we use this technique to demonstrate full-color CGHs that reconstruct hybrid scenes composed of real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.

  2. Lidar-based door and stair detection from a mobile robot

    NASA Astrophysics Data System (ADS)

    Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet

    2010-04-01

    We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.

  3. Language-driven anticipatory eye movements in virtual reality.

    PubMed

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  4. Three-dimensional contour edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yizhou; Ong, Sim Heng; Kassim, Ashraf A.; Foong, Kelvin W. C.

    2000-06-01

    This paper presents a novel algorithm for automatically extracting 3D contour edges, which are points of maximum surface curvature in a surface range image. The 3D image data are represented as a surface polygon mesh. The algorithm transforms the range data, obtained by scanning a dental plaster cast, into a 2D gray scale image by linearly converting the z-value of each vertex to a gray value. The Canny operator is applied to the median-filtered image to obtain the edge pixels and their orientations. A vertex in the 3D object corresponding to the detected edge pixel and its neighbors in the direction of the edge gradient are further analyzed with respect to their n-curvatures to extract the real 3D contour edges. This algorithm provides a fast method of reducing and sorting the unwieldy data inherent in the surface mesh representation. It employs powerful 2D algorithms to extract features from the transformed 3D models and refers to the 3D model for further analysis of selected data. This approach substantially reduces the computational burden without losing accuracy. It is also easily extended to detect 3D landmarks and other geometrical features, thus making it applicable to a wide range of applications.
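The first step of the algorithm, linearly mapping vertex z-values to gray values, can be sketched as below. The 256-level range and the rounding convention are assumptions, since the abstract only states that the conversion is linear:

```python
import numpy as np

def z_to_gray(z_values, levels=256):
    """Linearly map vertex z-values to gray levels [0, levels-1], producing
    the 2D gray-scale image that edge detection then operates on."""
    z = np.asarray(z_values, dtype=float)
    zmin, zmax = z.min(), z.max()
    if zmax == zmin:                       # flat surface -> mid gray
        return np.full(z.shape, (levels - 1) // 2, dtype=np.uint8)
    return np.round((z - zmin) / (zmax - zmin) * (levels - 1)).astype(np.uint8)
```

After this conversion, standard 2D operators such as Canny can be applied directly, which is the computational shortcut the abstract describes.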

  5. Researching on Real 3d Modeling Constructed with the Oblique Photogrammetry and Terrestrial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Han, Youmei; Jiao, Minglian; Shijuan

    2018-04-01

    With the rapid development of oblique photogrammetry, many cities have built real 3D models with this technology. Although it has the advantages of a short production period, high efficiency and good aerial viewing angles, the near-ground viewing angle of these real 3D models is often poor. With the increasing development of smart cities, the requirements on the reality, practicality and accuracy of real 3D models are becoming higher, and how to produce and improve real 3D models quickly has become one of the hot research directions of geospatial information. To meet this requirement, this paper combines the characteristics of current oblique photogrammetry modeling with terrestrial photogrammetry and proposes a new technological process, which consists of close-range sensor design, data acquisition and processing. The proposed method was tested using acquired oblique photography images. The results confirm the effectiveness of the proposed approach.

  6. High-speed image processing system and its micro-optics application

    NASA Astrophysics Data System (ADS)

    Ohba, Kohtaro; Ortega, Jesus C. P.; Tanikawa, Tamio; Tanie, Kazuo; Tajima, Kenji; Nagai, Hiroshi; Tsuji, Masataka; Yamada, Shigeru

    2003-07-01

    In this paper, a new application system with high-speed photography, i.e., an observational system for tele-micro-operation, is proposed, based on a dynamic focusing system and a high-speed image processing system using the "Depth From Focus (DFF)" criterion. In micro-operations such as microsurgery and DNA manipulation, the small depth of focus of the microscope makes observation difficult. For example, if the focus is on the object, the actuator cannot be seen with the microscope; on the other hand, if the focus is on the actuator, the object cannot be observed. In this sense, the "all-in-focus image," which holds in-focus texture all over the image, is useful for observing microenvironments under the microscope. It is also important to obtain the "depth map," which shows the 3D micro virtual environment in real time so that micro objects can be actuated intuitively. To realize real-time micro-operation with the DFF criterion, which has to integrate several images to obtain the "all-in-focus image" and "depth map," an image capture and processing system of at least 240 frames per second is required. This paper first briefly reviews the depth-from-focus criterion for achieving the all-in-focus image and the 3D microenvironment reconstruction simultaneously. After discussing the problems of our past system, a new frame-rate system is constructed with a high-speed video camera and FPGA hardware running at 240 frames per second. To apply this system to a real microscope, a new "ghost filtering" technique for reconstructing the all-in-focus image is proposed. Finally, micro observation shows the validity of this system.
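The DFF criterion itself, picking per pixel the best-focused frame of a focal stack to form both the all-in-focus image and the depth map, can be sketched as below. A simple local-variance focus measure is assumed here; the paper's actual measure and its ghost-filtering refinement are not reproduced:

```python
import numpy as np

def depth_from_focus(stack):
    """All-in-focus image and depth map from a focal stack of shape
    (n_frames, H, W): for every pixel, select the frame with the highest
    local focus measure.  The frame index doubles as the depth label."""
    def focus_measure(img):
        # variance over each 3x3 neighborhood, via nine shifted copies
        pad = np.pad(img.astype(float), 1, mode="edge")
        shifts = [pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)]
        return np.stack(shifts).var(axis=0)
    scores = np.stack([focus_measure(f) for f in stack])
    depth = scores.argmax(axis=0)
    rows, cols = np.indices(depth.shape)
    all_in_focus = stack[depth, rows, cols]
    return all_in_focus, depth
```

Because every frame is touched once, the cost scales linearly with stack size, which is why the abstract ties real-time DFF to a 240 frames-per-second capture and processing budget.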

  7. Verification of real sensor motion for a high-dynamic 3D measurement inspection system

    NASA Astrophysics Data System (ADS)

    Breitbarth, Andreas; Correns, Martin; Zimmermann, Manuel; Zhang, Chen; Rosenberger, Maik; Schambach, Jörg; Notni, Gunther

    2017-06-01

    Inline three-dimensional measurements are a growing part of optical inspection. Considering increasing production capacities and economic aspects, dynamic measurements under motion are inescapable. When a sequence of different patterns is used, as is generally done in fringe projection systems, relative movements of the measurement object with respect to the 3D sensor between the images of one pattern sequence have to be compensated. Based on the application of fully automated optical inspection of circuit boards at an assembly line, knowledge of the relative speed of movement between the measurement object and the 3D sensor system should be used inside the motion compensation algorithms. Optimally, this relative speed is constant over the whole measurement process and consists of only one motion direction, to avoid sensor vibrations. The quantitative evaluation of these two assumptions and their error impact on the 3D accuracy are the subject of the research project described in this paper. For our experiments we use a glass etalon with non-transparent circles and transmitted light. Focusing on the circle borders with a set of search rays is one of the most reliable methods to determine subpixel positions; the intersection point of all rays characterizes the center of each circle. Based on these circle centers, determined with a precision of approximately 1/50 pixel, the motion vector between two images can be calculated and compared with the input motion vector. Overall, the results are used to optimize the weight distribution of the 3D sensor head and reduce non-uniform vibrations. The final dynamic 3D measurement system has a motion-vector error of about 4 μm. Based on this outcome, simulations yield a 3D standard deviation at planar object regions of 6 μm; the same system yields a 3D standard deviation of 9 μm without the optimization of the weight distribution.
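Given the matched circle centers, the per-image-pair motion vector for a pure translation is just the mean center displacement. A minimal sketch (assuming centers are already matched one-to-one, as the etalon setup guarantees):

```python
import numpy as np

def motion_vector(centers_a, centers_b):
    """Least-squares translation between two sets of matched 2D circle
    centers: for a pure translation this is the mean displacement."""
    a = np.asarray(centers_a, dtype=float)
    b = np.asarray(centers_b, dtype=float)
    return (b - a).mean(axis=0)
```

Averaging over many centers is what lets the estimate inherit the roughly 1/50-pixel precision of the individual center localizations.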

  8. Graphical user interface concepts for tactical augmented reality

    NASA Astrophysics Data System (ADS)

    Argenta, Chris; Murphy, Anne; Hinton, Jeremy; Cook, James; Sherrill, Todd; Snarski, Steve

    2010-04-01

    Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program. Our approach to achieve the objectives of ULTRA-Vis, called iLeader, incorporates a full color 40° field of view (FOV) see-through holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark up the 3D battle-space with symbologic identification of graphical control measures, friendly force positions and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360° representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, nondisruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.

  9. Dense 3D Face Alignment from 2D Video for Real-Time Use

    PubMed Central

    Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo

    2018-01-01

    To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533
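The cascade regression framework described above repeatedly maps shape-indexed features to a shape update learned by regression. A minimal sketch on synthetic data follows; the linear feature map, noise level, and stage count are illustrative assumptions, not the paper's trained model (which regresses from image patches to dense 3D landmarks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each "image" yields features linearly related (plus noise)
# to the offset between the true shape and the current shape estimate.
n, d = 200, 4
true_shapes = rng.normal(size=(n, d))
A = rng.normal(size=(d, d))          # hidden feature map, unknown to the learner

def features(current, true):
    # Stand-in for shape-indexed features: in a real system these are
    # image measurements sampled at the current landmark estimate.
    return (true - current) @ A.T + 0.05 * rng.normal(size=current.shape)

# Train a cascade of linear regressors, each reducing the residual.
est = np.zeros_like(true_shapes)     # start from the mean shape (here: 0)
cascade = []
for stage in range(3):
    F = features(est, true_shapes)
    # Least-squares regressor from features to the remaining shape update.
    R, *_ = np.linalg.lstsq(F, true_shapes - est, rcond=None)
    cascade.append(R)
    est = est + F @ R

err0 = np.linalg.norm(true_shapes, axis=1).mean()        # mean-shape error
err3 = np.linalg.norm(true_shapes - est, axis=1).mean()  # after 3 stages
```

At test time the learned `cascade` would be applied stage by stage to a new image, which is what makes the approach fast enough for real-time use.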

  10. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This approach is also used to measure changes in the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, our visual pipeline does not require deformation models of objects and materials, and it works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from the recognition of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102
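The event-generation step — comparing the observed surface against a reference and signaling the controller when the deviation is large enough — might be sketched like this. Depth maps are modeled as plain arrays; the deviation threshold and minimum area are illustrative values, not the paper's.

```python
import numpy as np

def deformation_event(depth_ref, depth_cur, delta=0.005, min_area=20):
    """Flag a surface deformation when enough pixels deviate from the
    reference depth by more than `delta` metres. Both thresholds are
    illustrative assumptions."""
    diff = np.abs(depth_cur - depth_ref)
    deformed = diff > delta
    return bool(deformed.sum() >= min_area)

ref = np.full((40, 40), 0.50)    # flat surface 0.5 m from the camera
dent = ref.copy()
dent[10:20, 10:20] -= 0.01       # a 1 cm indentation from finger pressure
```

On `(ref, ref)` no event fires; on `(ref, dent)` the indentation exceeds both thresholds and an event message would be sent to the controller.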

  11. Augmented Reality Technology Using Microsoft HoloLens in Anatomic Pathology.

    PubMed

    Hanna, Matthew G; Ahmed, Ishtiaque; Nine, Jeffrey; Prajapati, Shyam; Pantanowitz, Liron

    2018-05-01

    Context Augmented reality (AR) devices such as the Microsoft HoloLens have not yet been widely used in the medical field. Objective To test the HoloLens for clinical and nonclinical applications in pathology. Design A Microsoft HoloLens was tested for virtual annotation during autopsy, viewing 3D gross and microscopic pathology specimens, navigating whole slide images, telepathology, as well as real-time pathology-radiology correlation. Results Pathology residents performing an autopsy wearing the HoloLens were remotely instructed with real-time diagrams, annotations, and voice instruction. 3D-scanned gross pathology specimens could be viewed as holograms and easily manipulated. Telepathology was supported during gross examination and at the time of intraoperative consultation, allowing users to remotely access a pathologist for guidance and to virtually annotate areas of interest on specimens in real time. The HoloLens permitted radiographs to be coregistered on gross specimens and thereby enhanced locating important pathologic findings. The HoloLens also allowed easy viewing and navigation of whole slide images, using an AR workstation, including multiple coregistered tissue sections facilitating volumetric pathology evaluation. Conclusions The HoloLens is a novel AR tool with multiple clinical and nonclinical applications in pathology. The device was comfortable to wear, easy to use, provided sufficient computing power, and supported high-resolution imaging. It was useful for autopsy, gross and microscopic examination, and ideally suited for digital pathology. Unique applications include remote supervision and annotation, 3D image viewing and manipulation, telepathology in a mixed-reality environment, and real-time pathology-radiology correlation.

  12. High-speed 3D surface measurement with a fringe projection based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2014-05-01

    A new optical sensor based on the fringe projection technique for accurate and fast measurement of object surfaces, mainly for industrial inspection tasks, is introduced. High-speed fringe projection and image recording at 180 Hz allow 3D rates of up to 60 Hz. The high measurement velocity was achieved by systematic fringe code reduction and parallel data processing. The image sequence length was reduced by omitting the Gray-code sequence and exploiting the geometric restrictions of the measurement objects. The sensor realizes three different measurement fields between 20 x 20 mm2 and 40 x 40 mm2 with lateral spatial resolutions between 10 μm and 20 μm at the same working distance. The measurable object height range is between +/- 0.5 mm and +/- 2 mm. A height resolution between 1 μm and 5 μm can be achieved depending on the properties of the measurement objects. The sensor may be used, e.g., for quality inspection of circuit boards or plugs in real-time industrial applications.
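Fringe projection sensors of this kind recover a wrapped phase map from a short sequence of phase-shifted fringe images; the shorter the sequence, the faster the 3D rate. The standard four-step phase-shifting computation below is a textbook formula, not necessarily this sensor's reduced code.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 0, π/2, π, 3π/2.

    With I_k = A + B·cos(φ + k·π/2):
        i3 - i1 = 2B·sin(φ),  i0 - i2 = 2B·cos(φ)
    so arctan2 recovers φ independently of offset A and modulation B."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes over one image row; the phase encodes surface position.
x = np.linspace(0, 4 * np.pi, 256)
phi = np.mod(x, 2 * np.pi) - np.pi            # ground-truth wrapped phase
a, b = 0.5, 0.4                               # offset and modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)          # matches phi up to 2π wraps
```

In a real sensor the wrapped phase is then unwrapped (here aided by the geometric restrictions mentioned in the abstract) and triangulated into height.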

  13. Field lens multiplexing in holographic 3D displays by using Bragg diffraction based volume gratings

    NASA Astrophysics Data System (ADS)

    Fütterer, G.

    2016-11-01

    Applications which can profit from holographic 3D displays are the visualization of 3D data, computer-integrated manufacturing, 3D teleconferencing and mobile infotainment. However, one problem of holographic 3D displays, which are based, e.g., on space-bandwidth-limited reconstruction of wave segments, is to realize a small form factor. Another problem is to provide a reasonably large volume for user placement, which means providing an acceptable freedom of movement. Both problems should be solved without decreasing the image quality of virtual and real object points, which are generated within the 3D display volume. A diffractive optical design using thick hologram gratings, which can be referred to as Bragg diffraction based volume gratings, can provide a small form factor and a high-definition, natural viewing experience of 3D objects. A large collimated wave can be provided by an anamorphic backlight unit. The complex-valued spatial light modulator adds local curvatures to the wave field it is illuminated with. The modulated wave field is focused onto the user plane by using a volume grating based field lens. Active-type liquid crystal gratings provide 1D fine tracking of approximately +/- 8°. Diffractive multiplexing has to be implemented for each color and for a set of focus functions providing coarse tracking. Boundary conditions of the diffractive multiplexing are explained with regard to the display layout and by using coupled wave theory (CWT). Aspects of diffractive cross talk and its suppression are discussed, including longitudinally apodized volume gratings.

  14. A topological framework for interactive queries on 3D models in the Web.

    PubMed

    Figueiredo, Mauro; Rodrigues, José I; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, this format carries no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open-source toolkit, TopTri (Topological model for Triangle meshes), for Web3D servers that builds the topological model for triangular meshes of manifold or non-manifold models. Web3D client applications using this toolkit query the web server for adjacency and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to compute local minimum points and iso-lines in a 3D mesh in a web browser. As an application, we also present the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time, making the presented toolkit appropriate for interactive Web3D applications.
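The kind of adjacency and incidence information such a toolkit serves can be built in one pass over the triangle list. The sketch below uses hypothetical function names, not TopTri's actual API, and a plain dictionary model rather than its server-side data structures.

```python
from collections import defaultdict

def build_topology(faces):
    """Incidence maps for a triangle mesh given as vertex-index triples."""
    vertex_faces = defaultdict(set)   # vertex id   -> incident face ids
    edge_faces = defaultdict(set)     # undirected edge -> adjacent face ids
    for fid, (a, b, c) in enumerate(faces):
        for v in (a, b, c):
            vertex_faces[v].add(fid)
        for u, v in ((a, b), (b, c), (c, a)):
            edge_faces[frozenset((u, v))].add(fid)
    return vertex_faces, edge_faces

# Two triangles sharing the edge (1, 2).
vf, ef = build_topology([(0, 1, 2), (1, 3, 2)])
```

Queries then become dictionary lookups: an interior edge maps to two faces, a boundary edge to one, and a non-manifold edge to three or more, which is why such precomputed maps answer adjacency queries in real time even for large meshes.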

  15. A Topological Framework for Interactive Queries on 3D Models in the Web

    PubMed Central

    Figueiredo, Mauro; Rodrigues, José I.; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, this format carries no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open-source toolkit, TopTri (Topological model for Triangle meshes), for Web3D servers that builds the topological model for triangular meshes of manifold or non-manifold models. Web3D client applications using this toolkit query the web server for adjacency and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to compute local minimum points and iso-lines in a 3D mesh in a web browser. As an application, we also present the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time, making the presented toolkit appropriate for interactive Web3D applications. PMID:24977236

  16. Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.

    PubMed

    Okoshi, T; Oshima, K

    1976-04-01

    In ordinary holography reconstructing a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. The optimum design and information reduction techniques are also discussed.

  17. 3D skin surface reconstruction from a single image by merging global curvature and local texture using the guided filtering for 3D haptic palpation.

    PubMed

    Lee, K; Kim, M; Kim, K

    2018-05-11

    Skin surface evaluation has been studied using various imaging techniques. However, all these studies had limited impact because they were performed using visual exam only. To improve on this scenario with haptic feedback, we propose 3D reconstruction of the skin surface using a single image. Unlike extant 3D skin surface reconstruction algorithms, we utilize the local texture and global curvature regions, combining the results for reconstruction. The first entails the reconstruction of global curvature, achieved by bilateral filtering that removes noise on the surface while maintaining the edge (ie, furrow) to obtain the overall curvature. The second entails the reconstruction of local texture, representing the fine wrinkles of the skin, using an advanced form of bilateral filtering. The final image is then composed by merging the two reconstructed images. We tested the curvature reconstruction part by comparing the resulting curvatures with measured values from real phantom objects while local texture reconstruction was verified by measuring skin surface roughness. Then, we showed the reconstructed result of our proposed algorithm via the reconstruction of various real skin surfaces. The experimental results demonstrate that our approach is a promising technology to reconstruct an accurate skin surface with a single skin image. We proposed 3D skin surface reconstruction using only a single camera. We highlighted the utility of global curvature, which has not been considered important in the past. Thus, we proposed a new method for 3D reconstruction that can be used for 3D haptic palpation, dividing the concepts of local and global regions. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
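The base/detail split described above — an edge-preserving smoother recovers the global curvature, and the residual is the fine texture — can be illustrated on a 1-D height profile. This uses a plain bilateral filter with illustrative parameters, not the paper's advanced variant or guided filtering.

```python
import numpy as np

def bilateral_1d(signal, sigma_s=5.0, sigma_r=0.2, radius=10):
    """Edge-preserving smoother: large steps (furrows) survive because the
    range weight suppresses contributions from across the edge, while fine
    texture is averaged away by the spatial Gaussian."""
    out = np.empty_like(signal)
    idx = np.arange(-radius, radius + 1)
    spatial = np.exp(-(idx ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        rangew = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * rangew
        out[i] = (w * window).sum() / w.sum()
    return out

# A furrow (step) plus fine wrinkles (high-frequency ripple).
x = np.linspace(0, 1, 200)
profile = np.where(x > 0.5, 1.0, 0.0) + 0.05 * np.sin(80 * np.pi * x)
base = bilateral_1d(profile)      # global curvature: step preserved
detail = profile - base           # local texture: the fine wrinkles
```

Merging a (possibly re-weighted) `base` and `detail` reconstructs the surface, which mirrors the paper's composition of global curvature and local texture before haptic rendering.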

  18. Towards the automatic scanning of indoors with robots.

    PubMed

    Adán, Antonio; Quintana, Blanca; Vázquez, Andres S; Olivares, Alberto; Parra, Eduardo; Prieto, Samuel

    2015-05-19

    This paper is framed in the research fields of both 3D digitization and intelligent 3D data processing. Our objective is to develop a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing; this paper is focused on the first stage. We show how the mobile robot, which carries a 3D scanner, is able to, on the one hand, make decisions about the next best scanner position and, on the other hand, navigate autonomously in the scene with the help of the data collected from earlier scans. After this stage, millions of 3D points are converted into a simplified 3D indoor model. The robot imposes a stopping criterion when the point cloud covers the essential parts of the scene. This system has been tested under real indoor conditions with promising results. Future work will extend the method to much more complex and larger scenarios.

  19. Towards the Automatic Scanning of Indoors with Robots

    PubMed Central

    Adán, Antonio; Quintana, Blanca; Vázquez, Andres S.; Olivares, Alberto; Parra, Eduardo; Prieto, Samuel

    2015-01-01

    This paper is framed in the research fields of both 3D digitization and intelligent 3D data processing. Our objective is to develop a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing; this paper is focused on the first stage. We show how the mobile robot, which carries a 3D scanner, is able to, on the one hand, make decisions about the next best scanner position and, on the other hand, navigate autonomously in the scene with the help of the data collected from earlier scans. After this stage, millions of 3D points are converted into a simplified 3D indoor model. The robot imposes a stopping criterion when the point cloud covers the essential parts of the scene. This system has been tested under real indoor conditions with promising results. Future work will extend the method to much more complex and larger scenarios. PMID:25996513

  20. 3D Printing of Biomolecular Models for Research and Pedagogy

    PubMed Central

    Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel

    2017-01-01

    The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403

  1. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  2. 3D Visible-Light Invisibility Cloak.

    PubMed

    Zheng, Bin; Zhu, Rongrong; Jing, Liqiao; Yang, Yihao; Shen, Lian; Wang, Huaping; Wang, Zuojia; Zhang, Xianmin; Liu, Xu; Li, Erping; Chen, Hongsheng

    2018-06-01

    The concept of an invisibility cloak is a fixture of science fiction, fantasy, and the collective imagination. However, a real device that can hide an object from sight in visible light from absolutely any viewpoint would be extremely challenging to build. The main obstacle to creating such a cloak is the coupling of the electromagnetic components of light, which would necessitate the use of complex materials with specific permittivity and permeability tensors. Previous cloaking solutions have involved circumventing this obstacle by functioning either in static (or quasistatic) fields where these electromagnetic components are uncoupled or in diffusive light scattering media where complex materials are not required. In this paper, concealing a large-scale spherical object from human sight from three orthogonal directions is reported. This result is achieved by developing a 3D homogeneous polyhedral transformation and a spatially invariant refractive index discretization that considerably reduce the coupling of the electromagnetic components of visible light. This approach allows for a major simplification in the design of 3D invisibility cloaks, which can now be created at a large scale using homogeneous and isotropic materials.

  3. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review on how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067

  4. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review on how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.

  5. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
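The core of such a region-adaptive hierarchical transform is a weight-dependent Haar butterfly applied level by level, where a node's weight counts the points it covers. The simplified 1-D sketch below (unit initial weights, no octree traversal, no quantization or entropy coding) illustrates two key invariants: constant colors produce all-zero detail coefficients, and the butterflies are orthonormal, so energy is preserved.

```python
import numpy as np

def raht_level(values, weights):
    """One level of a weight-adaptive Haar transform: each neighbouring
    pair is split into a weighted low-pass value and a high-pass detail."""
    low, low_w, high = [], [], []
    i = 0
    while i < len(values):
        if i + 1 == len(values):          # unpaired node passes through
            low.append(values[i])
            low_w.append(weights[i])
            break
        a, b = np.sqrt(weights[i]), np.sqrt(weights[i + 1])
        n = np.sqrt(weights[i] + weights[i + 1])
        low.append((a * values[i] + b * values[i + 1]) / n)
        high.append((-b * values[i] + a * values[i + 1]) / n)
        low_w.append(weights[i] + weights[i + 1])
        i += 2
    return low, low_w, high

def raht(values):
    """Full transform: returns the DC coefficient and all detail coefficients."""
    vals, ws, details = list(values), [1.0] * len(values), []
    while len(vals) > 1:
        vals, ws, high = raht_level(vals, ws)
        details.extend(high)
    return vals[0], details

dc, details = raht([0.6] * 8)   # a constant-colour point cloud
```

In the actual codec the same butterfly runs along octree levels, and the sparse detail coefficients are what the Laplace-modeled arithmetic coder compresses.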

  6. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    PubMed

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.

  7. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method for both orientation and scaling estimation.
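For reference, the classical baseline the paper competes with estimates a principal axis from second-order moments: the dominant eigenvector of the point covariance. The sketch below shows that moment-based estimate (not the paper's vibration-mode features) on a synthetic elongated shape; names and parameters are illustrative.

```python
import numpy as np

def principal_axis(points):
    """Orientation (degrees, mod 180) of the dominant axis of a 2-D point
    set, via the eigenvector of the covariance with the largest variance."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    return np.degrees(np.arctan2(axis[1], axis[0])) % 180.0

# An elongated point cloud rotated by 30 degrees.
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])
theta = np.radians(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
angle = principal_axis(pts @ rot.T)   # close to 30 degrees
```

Moment-based estimates like this are sensitive to boundary noise, which is one motivation for the frequency-based deformable-model features the paper proposes.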

  8. [Overexpression of miR-519d-3p inhibits the proliferation of DU-145 prostate cancer cells by reducing TRAF4].

    PubMed

    Li, Xiaohui; Han, Xingtao; Yang, Jinhui; Sun, Jiantao; Wei, Pengtao

    2018-01-01

    Objective To observe the effect of microRNA-519d-3p (miR-519d-3p) on the proliferation of prostate cancer cells and explore the possible molecular mechanism. Methods The expression level of miR-519d-3p in PC-3, DU-145, 22RV1, PC-3M, LNCaP human prostate cancer cells and RWPE-1 human normal prostate epithelial cells was detected by real-time quantitative PCR. miR-519d-3p mimics or negative control microRNAs (miR-NC) was transfected into the prostate cancer cells with the lowest level of miR-519d-3p expression. Transfection efficiency was examined. The effect of miR-519d-3p on the cell cycle of prostate cancer was detected by flow cytometry. MTT assay and plate clone formation assay were used to detect its effect on the proliferation of prostate cancer cells. Bioinformatics software was used to predict and dual luciferase reporter assay was used to validate the target gene of miR-519d-3p. Real-time quantitative PCR was used to detect the expression of miR-519d-3p target gene. Western blot analysis was used to detect the expression of target gene protein and downstream protein. Results The expression of miR-519d-3p in normal prostate epithelial cells was significantly higher than that in prostate cancer cells, and the lowest was found in DU-145 cells. After transfected with miR-519d-3p mimics, the expression level of miR-519d-3p in DU-145 cells increased significantly. Bioinformatics prediction and dual luciferase reporter gene confirmed that tumor necrosis factor receptor associated factor 4 (TRAF4) was the target gene of miR-519d-3p. Overexpression of miR-519d-3p significantly reduced the expression of TRAF4 gene and its downstream TGF-β signaling pathway proteins in the prostate cancer cells. Conclusion The expression of miR-519d-3p is down-regulated in prostate cancer cells. Overexpression of miR-519d-3p can inhibit the proliferation of prostate cancer cells. The possible mechanism is that miR-519d-3p inhibits the expression of TRAF4.

  9. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive, 3-D imaging. Applications to telesurgery and telemedicine as well as to the needs of the defense and intelligence communities are also discussed.

  10. Physically-Based Modelling and Real-Time Simulation of Fluids.

    NASA Astrophysics Data System (ADS)

    Chen, Jim Xiong

    1995-01-01

    Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier-Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.
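
    As a minimal illustration of the surface-mapping idea described above (not the paper's actual CFD solver), the sketch below displaces a height field in proportion to a 2D pressure field; the Gaussian pressure bump and scale factor are invented for the example.

    ```python
    import numpy as np

    def surface_from_pressure(p, scale=0.05):
        """Map a 2D pressure field to a 3D surface height field.

        A toy illustration: the surface is displaced from its rest height in
        proportion to the local deviation of pressure from the mean, so
        high-pressure regions bulge upward.
        """
        return scale * (p - p.mean())

    # Toy pressure field: a Gaussian "bump", as if an object pushed the fluid.
    x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    p = np.exp(-(x**2 + y**2) / 0.1)

    h = surface_from_pressure(p)
    # Mean-zero displacement: the surface rises and falls about the rest height.
    print(h.shape, abs(h.sum()) < 1e-9)
    ```

    A real solver would update the pressure field each frame from the 2D Navier-Stokes equations and re-map the surface, which is what keeps the cost far below a full 3D simulation.
    
    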

  11. Real-Time Upper-Ocean Temperature Observations from Aircraft during Operational Hurricane Reconnaissance Missions: AXBT Demonstration Project Year One Results

    DTIC Science & Technology

    2013-12-01

    Demonstration Project beginning in the 2011 North Atlantic hurricane season (WG/HWSOR 2011). The primary objectives of the first year of the demonstration...after Atlantic hurricanes from WP-3D hurricane research flights conducted jointly by the NOAA Aircraft Operations Center (AOC), the NOAA/Hurricane... Atlantic hurricane season; 3) to present an initial set of results from the inclusion of AXBT data in both statistical and dynamical numerical prediction

  12. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
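
    The per-modality transfer functions and compositing mentioned above can be sketched on the CPU as follows; this is a hedged illustration, not the paper's GPU shader code, and the two toy transfer functions (grayscale for MR, translucent red for US) are assumptions for the example.

    ```python
    import numpy as np

    def transfer(values, tf):
        """Look up RGBA colors for normalized scalars via a 256-entry table."""
        idx = np.clip((values * 255).astype(int), 0, 255)
        return tf[idx]

    # Two toy transfer functions: MR mapped to grayscale, US to translucent red.
    ramp = np.linspace(0, 1, 256)
    tf_mr = np.stack([ramp, ramp, ramp, ramp], axis=1)
    tf_us = np.stack([ramp, 0 * ramp, 0 * ramp, 0.5 * ramp], axis=1)

    # Stand-in image slices (random values in place of real MR/US data).
    mr = np.random.default_rng(0).random((8, 8))
    us = np.random.default_rng(1).random((8, 8))

    mr_rgba = transfer(mr, tf_mr)
    us_rgba = transfer(us, tf_us)

    # Simple alpha compositing of the US layer over the MR layer:
    a = us_rgba[..., 3:4]
    out = us_rgba[..., :3] * a + mr_rgba[..., :3] * (1 - a)
    print(out.shape)  # (8, 8, 3)
    ```

    On the GPU the same lookup and blend run per fragment, with each modality bound as its own 3D texture.
    
    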

  13. Modeling job sites in real time to improve safety during equipment operation

    NASA Astrophysics Data System (ADS)

    Caldas, Carlos H.; Haas, Carl T.; Liapi, Katherine A.; Teizer, Jochen

    2006-03-01

    Real-time three-dimensional (3D) modeling of work zones has received increasing interest as a means to make equipment operation faster, safer, and more precise. In addition, hazardous job site environments, such as those on construction sites, call for new devices that can rapidly and actively model static and dynamic objects. Flash LADAR (Laser Detection and Ranging) cameras are one of the recent technology developments that allow rapid spatial data acquisition of scenes. Algorithms that can process and interpret the output of such enabling technologies into three-dimensional models have the potential to significantly improve work processes. One particularly important application is modeling the location and path of objects in the trajectory of heavy construction equipment. Detecting and mapping people, materials, and equipment into a three-dimensional computer model allows analysis of location and path and can limit or restrict access to hazardous areas. This paper presents experiments and results of a real-time three-dimensional modeling technique to detect static and moving objects within the field of view of a high-frame-update-rate laser range scanning device. Applications related to heavy equipment operations on transportation and construction job sites are specified.
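
    The hazard-zone restriction described above can be sketched as a simple test on a point cloud; this is an illustrative fragment, not the authors' algorithm, and the cylindrical zone, sample points, and radius are invented for the example.

    ```python
    import numpy as np

    def points_in_hazard_zone(points, center, radius):
        """Mask of 3D points within `radius` of the equipment center (XY plane).

        Models the hazard zone as a vertical cylinder around the equipment,
        which suits range data of people/materials near a swinging machine.
        """
        d = np.linalg.norm(points[:, :2] - center[None, :2], axis=1)
        return d < radius

    # Three stand-in range returns (x, y, z) from one LADAR frame:
    points = np.array([[1.0, 1.0, 0.5],
                       [4.0, 4.0, 1.0],
                       [0.5, -0.2, 1.8]])
    mask = points_in_hazard_zone(points, center=np.array([0.0, 0.0, 0.0]),
                                 radius=2.0)
    print(mask.tolist())  # [True, False, True]
    ```
    
    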

  14. Aerosol Emissions from Fuse-Deposition Modeling 3D Printers in a Chamber and in Real Indoor Environments.

    PubMed

    Vance, Marina E; Pegues, Valerie; Van Montfrans, Schuyler; Leng, Weinan; Marr, Linsey C

    2017-09-05

    Three-dimensional (3D) printers are known to emit aerosols, but questions remain about their composition and the fundamental processes driving emissions. The objective of this work was to characterize the aerosol emissions from the operation of a fuse-deposition modeling 3D printer. We modeled the time- and size-resolved emissions of submicrometer aerosols from the printer in a chamber study, gained insight into the chemical composition of emitted aerosols using Raman spectroscopy, and measured the potential for exposure to the aerosols generated by 3D printers under real-use conditions in a variety of indoor environments. The average aerosol emission rates ranged from ∼10^8 to ∼10^11 particles min^-1, and the rates varied over the course of a print job. Acrylonitrile butadiene styrene (ABS) filaments generated the largest number of aerosols, and wood-infused polylactic acid (PLA) filaments generated the smallest amount. The emission factors ranged from 6 × 10^8 to 6 × 10^11 per gram of printed part, depending on the type of filament used. For ABS, the Raman spectra of the filament and the printed part were indistinguishable, while the aerosol spectra lacked important peaks corresponding to styrene and acrylonitrile, which are both present in ABS. This observation suggests that aerosols are not a result of volatilization and subsequent nucleation of ABS or direct release of ABS aerosols.
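
    Emission rates of the kind reported above are commonly derived from a well-mixed chamber mass balance; the sketch below shows that standard calculation (not necessarily the authors' exact analysis), with all numbers illustrative rather than taken from the study.

    ```python
    # Well-mixed chamber mass balance: with emission rate E (particles/min),
    # chamber volume V (m^3) and total first-order loss rate k (1/min,
    # ventilation + deposition), concentration follows dC/dt = E/V - k*C,
    # so at steady state E = k * C_ss * V.

    def emission_rate(c_ss, k, volume):
        """Steady-state particle emission rate from chamber measurements.

        c_ss   : steady-state number concentration (particles per m^3)
        k      : first-order loss rate (per minute)
        volume : chamber volume (m^3)
        """
        return k * c_ss * volume

    # Illustrative numbers only (not from the study):
    E = emission_rate(c_ss=2.0e10, k=0.5, volume=1.0)
    print(f"{E:.1e} particles/min")
    ```

    With these made-up inputs the estimate lands around 10^10 particles/min, i.e. inside the 10^8-10^11 range the abstract reports.
    
    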

  15. Laser scanning stereomicroscopy for fast volumetric imaging with two-photon excitation and scanned Bessel beams

    NASA Astrophysics Data System (ADS)

    Yang, Yanlong; Zhou, Xing; Li, Runze; Van Horn, Mark; Peng, Tong; Lei, Ming; Wu, Di; Chen, Xun; Yao, Baoli; Ye, Tong

    2015-03-01

    Bessel beams have been used in many applications due to their unique optical property of maintaining an unchanged intensity profile during propagation. In imaging applications, Bessel beams have been successfully used to provide extended focuses for volumetric imaging and uniform illumination planes in light-sheet microscopy. Coupled with two-photon excitation, Bessel beams have been successfully used to realize fluorescence-projected volumetric imaging. We previously demonstrated a stereoscopic solution, two-photon fluorescence stereomicroscopy (TPFSM), for recovering depth information in volumetric imaging with Bessel beams. In TPFSM, tilted Bessel beams are used to generate stereoscopic images on a laser scanning two-photon fluorescence microscope; after post-processing, the acquired volume images can be viewed with 3D perception through anaglyph 3D glasses. However, the tilted Bessel beams were generated by laterally shifting either an axicon or an objective; the slow imaging speed and severe aberrations made this approach difficult to use for real-time volume imaging. In this article, we report recent improvements of TPFSM with a newly designed scanner and imaging software, which allow 3D stereoscopic imaging without moving any optical components on the setup. These improvements have dramatically increased focusing quality and imaging speed, so that TPFSM can potentially be performed in real time to provide 3D visualization in scattering media without post-processing.

  16. Memory color of natural familiar objects: effects of surface texture and 3-D shape.

    PubMed

    Vurro, Milena; Ling, Yazhu; Hurlbert, Anya C

    2013-06-28

    Natural objects typically possess characteristic contours, chromatic surface textures, and three-dimensional shapes. These diagnostic features aid object recognition, as does memory color, the color most associated in memory with a particular object. Here we aim to determine whether polychromatic surface texture, 3-D shape, and contour diagnosticity improve memory color for familiar objects, separately and in combination. We use solid three-dimensional familiar objects rendered with their natural texture, which participants adjust in real time to match their memory color for the object. We analyze mean, accuracy, and precision of the memory color settings relative to the natural color of the objects under the same conditions. We find that in all conditions, memory colors deviate slightly but significantly in the same direction from the natural color. Surface polychromaticity, shape diagnosticity, and three dimensionality each improve memory color accuracy, relative to uniformly colored, generic, or two-dimensional shapes, respectively. Shape diagnosticity improves the precision of memory color also, and there is a trend for polychromaticity to do so as well. Differently from other studies, we find that the object contour alone also improves memory color. Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. The results support the hypothesis that memory color representations are polychromatic and are synergistically linked with diagnostic shape representations.

  17. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurement changes according to tilt value have no significant correlation (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.
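
    The DFOV adjustment and correlation expression described above can be sketched as follows; the measurement data, reference DFOV, and linear model below are hypothetical stand-ins, not the study's values.

    ```python
    import numpy as np

    # Hypothetical example: PACS on-screen distances are first normalized by
    # the display field of view (DFOV), then a linear fit maps the adjusted
    # values onto caliper ("real") values.
    dfov_ref = 200.0                                     # reference DFOV (mm)
    dfov = np.array([180.0, 200.0, 220.0, 240.0, 260.0])  # per-scan DFOV (mm)
    pacs = np.array([44.5, 50.0, 54.4, 58.5, 61.5])       # on-screen (mm)
    real = np.array([40.1, 50.0, 59.8, 70.2, 79.9])       # caliper (mm)

    pacs_adj = pacs * dfov / dfov_ref      # DFOV-adjusted PACS measurement
    a, b = np.polyfit(pacs_adj, real, 1)   # correlation expression: real ~ a*x + b

    predicted = a * pacs_adj + b
    print(np.max(np.abs(predicted - real)))  # small residual after adjustment
    ```

    The point of the adjustment is visible in the numbers: the raw PACS values vary with DFOV, but once rescaled they line up almost one-to-one with the caliper values.
    
    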

  18. Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study

    PubMed Central

    Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-01-01

    Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. 
The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real: 12.6%), and meat and fish (virtual: 16.5%; real: 16.8%). Significant differences in proportional expenditures were observed for 6 food groups, with largest differences (virtual – real) for dairy (in expenditure 6.5%, P<.001; in items 2.2%, P=.04) and fresh fruit and vegetables (in expenditure: –3.1%, P=.04; in items: 5.9%, P=.002). There was no trend of overspending in the Virtual Supermarket and participants experienced a medium-to-high presence (88%, 73/83 scored medium; 8%, 7/83 scored high). Conclusions Shopping patterns in the Virtual Supermarket were comparable to those in real life. Overall, the Virtual Supermarket is a valid tool to measure food purchasing behavior. Nevertheless, it is important to improve the functionality of some food categories, in particular fruit and vegetables and dairy. The results of this validation will assist in making further improvements to the software and with optimization of the internal and external validity of this innovative methodology. PMID:25921185

  19. An anti-disturbing real time pose estimation method and system

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object requires some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer from feature loss. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, known features are tracked to calculate the pose in the current and the next image. Second, unknown but good features to track are automatically detected in the current and the next image. Third, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features can be solved from the object's poses at the two moments and the features' 2D locations in the two images, except in two cases: first, when both camera and object have no relative motion and the camera parameters (focal length, principal point, etc.) do not change between the two moments; and second, when there is no shared scene or no matched feature between the two images. Finally, because the formerly unknown features are now known, pose estimation can continue in the subsequent images despite the loss of the original known features, by repeating the process above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is difficult to achieve on a Central Processing Unit (CPU). 
Compared with other pose estimation methods, this new method can estimate the pose between camera and object even when some or all known features are lost, and it has a quick response time thanks to GPU parallel computing. The method presented here can be widely used in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in autonomous navigation, positioning, and robotics in unknown environments. The results of simulations and experiments demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show the method is reasonable and efficient.
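
    The key step above, recovering the 3D position of a newly detected feature from the object's poses at two moments and its 2D locations in the two images, is classically done by linear triangulation. The sketch below shows that standard method with synthetic cameras, not the authors' implementation.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Recover a 3D point from its 2D projections under cameras P1, P2.

        Standard linear (DLT) triangulation: each observation contributes two
        rows to a homogeneous system whose null space is the 3D point.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Two synthetic camera matrices (identity intrinsics, second camera
    # translated along x) and a known 3D test point:
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.5, 0.2, 4.0])

    X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
    print(np.allclose(X_est, X_true))  # True
    ```

    Once triangulated, the formerly unknown feature can serve as a known feature for the next frames, which is exactly how the method survives the loss of the original features.
    
    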

  20. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992) GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 3D, Semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D) and have been used to study the response of clouds to large-scale forcing. In early 3D simulations, the model domain was small and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR) and at NASA Goddard Space Flight Center. At Goddard, the 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE, GATE, SCSMEX, ARM, and KWAJEX using a 512 by 512 km domain (with 2-km resolution). The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D GCE model simulations. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique, (2) to calculate and examine the surface energy (especially radiation) and water budgets, and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  1. Modern Methods for fast generation of digital holograms

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.

    2010-06-01

    With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired with optical means, or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies also casts light on the emergence of real-time video holographic systems, enabling animated 3-D content to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but at the same time the heavy computation involved imposes difficulty in practical applications. In this paper, a summary of a number of recent successful accomplishments in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.
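
    For context on why hologram generation is computationally heavy, the brute-force point-source method (one of the baselines that fast CGH methods improve upon) can be sketched as follows; the wavelength, pixel pitch, and toy point scene are assumptions for the example, not values from the paper.

    ```python
    import numpy as np

    wavelength = 633e-9            # He-Ne red, metres (assumed)
    k = 2 * np.pi / wavelength
    pitch = 8e-6                   # hologram pixel pitch, metres (assumed)
    N = 256                        # hologram is N x N pixels

    # Hologram plane coordinates
    u = (np.arange(N) - N / 2) * pitch
    X, Y = np.meshgrid(u, u)

    # A tiny 3D "scene": a few self-luminous points (x, y, z, amplitude)
    points = [(0.0, 0.0, 0.05, 1.0),
              (3e-4, -2e-4, 0.06, 0.8)]

    # Superpose one spherical wave per scene point on the hologram plane.
    field = np.zeros((N, N), dtype=complex)
    for (px, py, pz, amp) in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r

    # Interfere with an on-axis plane reference wave and keep the intensity,
    # which is what a recording medium would store.
    hologram = np.abs(field + 1.0) ** 2
    print(hologram.shape)  # (256, 256)
    ```

    The cost grows as (pixels × scene points), which for realistic scenes is exactly the burden the fast-generation methods surveyed in this paper try to remove.
    
    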

  2. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation.

    PubMed

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  3. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Keun, E-mail: ykkim@handong.edu; Kim, Kyung-Soo

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  4. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    NASA Astrophysics Data System (ADS)

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
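
    Part of the pose geometry described in the records above can be sketched from the four beam points alone: a best-fit plane through the returns yields the container face's distance and tilt. This is an illustration of the geometry only, with synthetic points, not the authors' full 6-DOF estimator.

    ```python
    import numpy as np

    def plane_pose(points):
        """Fit a plane n.x = d to >= 3 points; return unit normal and offset."""
        centroid = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - centroid)
        n = Vt[-1]                 # direction of least variance = plane normal
        if n[2] < 0:               # orient the normal toward the sensor
            n = -n
        return n, n @ centroid

    # Four synthetic beam hits on a container face tilted 5 degrees about x,
    # 2 m from the sensor:
    t = np.radians(5.0)
    R = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t), np.cos(t)]])
    corners = np.array([[-0.5, -0.5, 0], [0.5, -0.5, 0],
                        [0.5, 0.5, 0], [-0.5, 0.5, 0]])
    points = corners @ R.T + np.array([0, 0, 2.0])

    n, d = plane_pose(points)
    tilt = np.degrees(np.arccos(n[2]))
    print(round(tilt, 1), round(d, 2))  # 5.0 1.99
    ```

    The remaining degrees of freedom (in-plane translation and yaw) would come from where the four beam points land on the face, which is the part a full estimator adds.
    
    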

  5. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. 
We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
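
    The planar homography that the alignment algorithm above estimates iteratively is often initialized from the standard linear (DLT) solution on point correspondences; the sketch below shows that baseline with a synthetic homography, not the authors' parameter-space iteration.

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Estimate H (3x3, H @ [x,y,1] ~ [u,v,1]) from >= 4 point pairs."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(rows, float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    # Synthetic check: generate correspondences from a known homography.
    H_true = np.array([[1.1, 0.02, 5.0],
                       [-0.01, 0.95, -3.0],
                       [1e-4, 2e-4, 1.0]])
    src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 25]], float)
    dst_h = np.column_stack([src, np.ones(5)]) @ H_true.T
    dst = dst_h[:, :2] / dst_h[:, 2:]

    H_est = homography_dlt(src, dst)
    print(np.allclose(H_est, H_true, atol=1e-6))  # True
    ```

    An iterative scheme like the one described above would then refine these eight parameters directly against the image intensities, which is what gives robustness to noise and large inter-frame disturbances.
    
    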

  6. Vorticity and helicity decompositions and dynamics with real Schur form of the velocity gradient

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Zhou

    2018-03-01

    The real Schur form (RSF) of a generic velocity gradient field ∇u is exploited to expose the structures of flows, in particular, our field decomposition resulting in two vorticities with only mutual linkage as the topological content of the global helicity (accordingly decomposed into two equal parts). The local transformation to the RSF may indicate alternative (co)rotating frame(s) for specifying the objective argument(s) of the constitutive equation. When ∇u is uniformly of RSF in a fixed Cartesian coordinate frame, i.e., ux = ux(x, y) and uy = uy(x, y), but uz = uz(x, y, z), the model, with the decomposed vorticities both frozen-in to u, is for two-component-two-dimensional-coupled-with-one-component-three-dimensional flows in between two-dimensional-three-component (2D3C) and fully three-dimensional-three-component ones and may help curing the pathology in the helical 2D3C absolute equilibrium, making the latter effectively work in more realistic situations.
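
    The real Schur form used above can be computed numerically for any velocity-gradient tensor; the sketch below uses SciPy on an arbitrary traceless 3x3 gradient (invented for the example), recovering the orthogonal change of frame Z and the quasi-upper-triangular factor T whose 2x2 diagonal block encodes the rotational part.

    ```python
    import numpy as np
    from scipy.linalg import schur

    # An arbitrary traceless velocity-gradient tensor (illustrative only):
    A = np.array([[0.0, -1.0, 0.3],
                  [1.0, 0.0, 0.1],
                  [0.2, 0.0, 0.0]])

    # Real Schur form: A = Z T Z^T with Z orthogonal and T quasi-triangular.
    T, Z = schur(A, output='real')

    print(np.allclose(Z @ T @ Z.T, A))      # decomposition reproduces A
    print(np.allclose(Z @ Z.T, np.eye(3)))  # Z is orthogonal
    ```

    The columns of Z give the local frame in which the gradient takes its RSF, i.e. a candidate for the (co)rotating frame mentioned in the abstract.
    
    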

  7. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook, our first implementation on a laptop computer; tangiView, a more refined implementation on a tablet device; tangiPaint, a tangible digital painting application; and phantoView, an application that takes the tangible imaging concept into stereoscopic 3D.

  8. Virtual integral holography

    NASA Astrophysics Data System (ADS)

    Venolia, Dan S.; Williams, Lance

    1990-08-01

    A range of stereoscopic display technologies exist which are no more intrusive, to the user, than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point-of-view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display" technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of data bases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden surface elimination. Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder, which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme), but the three dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus bandwidth. 
This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display (film processing turnaround time and the difficulty of displaying scenes in full color) are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.
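
The virtual integral holography idea reduces, at its core, to indexing a precomputed frame by the tracked viewing angle. A minimal sketch of that mapping follows; the function name and the assumption of a full 360-degree horizontal multiplexing are ours, not the paper's:

```python
import math

def frame_for_viewpoint(eye_x, eye_y, center_x, center_y, num_frames):
    """Map a tracked eye position (scene coordinates) to the index of the
    precomputed frame whose recording angle best matches the viewing angle,
    assuming frames are multiplexed evenly over 360 degrees of horizontal
    parallax around the display center."""
    angle = math.atan2(eye_y - center_y, eye_x - center_x) % (2 * math.pi)
    return int(angle / (2 * math.pi) * num_frames) % num_frames
```

As the viewer moves around the display, successive calls return successive frame indices, reproducing the turntable recording's motion parallax.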

  9. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media in place of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussions then concentrate on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward answer to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, the concept of a virtual object surface for compressing the tremendous amount of ray data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  10. Optical 3D-coordinate measuring system using structured light

    NASA Astrophysics Data System (ADS)

    Schreiber, Wolfgang; Notni, Gunther; Kuehmstedt, Peter; Gerber, Joerg; Kowarschik, Richard M.

    1996-09-01

    The paper describes an optical shape measurement technique based on a consistent fringe projection principle. We demonstrate a real 3D coordinate measuring system in which the scale of the coordinates is given only by the illumination structures. This method has the advantage that the aberrations of the observing system and the depth-dependent imaging scale have no influence on the measuring accuracy; moreover, the measurements are independent of the position of the camera with respect to the object under test. Furthermore, it is shown that the influence of specular effects of the surface on the measurement result can be eliminated. We also developed a very simple algorithm to calibrate the measuring system. The measurement examples show that a relative accuracy of 10^-4 (i.e., 10 micrometers) within an object volume of 100 × 100 × 70 mm³ is achievable. Furthermore, it is demonstrated that the resulting set of coordinate values can be processed in CNC and CAD systems.

  11. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing-induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions.
Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
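
Step (3) of the method, selecting the template and position with the highest normalized cross-correlation coefficient, can be sketched as an exhaustive search. This is a naive illustration under our own naming, not the authors' implementation; a real-time system would use FFT-based correlation:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient between equal-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def track(image, templates):
    """Slide every template over the image; return (template index, row, col)
    of the best match. Because each template has a known through-plane offset,
    the winning index plus the 2D match position gives a 3D localization."""
    th, tw = templates[0].shape
    best = (-2.0, 0, 0, 0)  # (score, template index, row, col)
    for k, tmpl in enumerate(templates):
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                score = ncc(image[r:r + th, c:c + tw], tmpl)
                if score > best[0]:
                    best = (score, k, r, c)
    return best[1:]
```

For an image containing an exact copy of one template, the search returns that template's index and the copy's top-left position.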

  12. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  13. Objects Architecture: A Comprehensive Design Approach for Real-Time, Distributed, Fault-Tolerant, Reactive Operating Systems.

    DTIC Science & Technology

    1987-09-01

    real-time operating system should be efficient from the real-time point...[5,8]) system naming scheme. 3.2 Protecting Objects: Real-time embedded systems usually neglect protection mechanisms. However, a real-time operating system cannot...allocation mechanism should adhere to application constraints. This strong relationship between a real-time operating system and the application...

  14. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference. PMID:27463714
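
The fusion of 2D feature points with the 3D face model rests on the Kalman predict/update cycle. A minimal linear Kalman step is sketched below for illustration only; the paper's Extended Kalman Filter additionally linearizes a nonlinear camera projection model, which is omitted here:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: measurement vector,
    F: state transition, H: measurement matrix, Q/R: process/measurement noise."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With equal prior and measurement variance, one step moves the estimate halfway toward the measurement and halves the covariance, the familiar sanity check for a correct update.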

  15. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.

  16. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction: The angular dependence of TLD card response may cause the results of environmental dosimetry to deviate from the true value, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed rectangularly in a cylindrical holder and calibrated using 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result: The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729

  17. Sequence design and software environment for real-time navigation of a wireless ferromagnetic device using MRI system and single echo 3D tracking.

    PubMed

    Chanu, A; Aboussouan, E; Tamaz, S; Martel, S

    2006-01-01

    The software architecture for the navigation of an untethered ferromagnetic device in 1D and 2D phantom environments is briefly described. Navigation is achieved using the real-time capabilities of a Siemens 1.5 T Avanto MRI system coupled with a dedicated software environment and a specially developed 3D tracking pulse sequence. Real-time control of the magnetic core is executed through the implementation of a simple PID controller. 1D and 2D experimental results are presented.
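
The "simple PID controller" mentioned above can be sketched as follows; the class shape, gains, and timestep are placeholders of ours, not values from the paper:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        """Return the control output for one sampling period."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the navigation setting, the setpoint would be the desired device position from the planned path and the measurement the position reported by the 3D tracking pulse sequence.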

  18. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind-tunnel-derived data in an interactive virtual environment in real time. LiveView3D combines streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real-time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data, both in real time and in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  19. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax, without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance are filled with stereoscopic pair images, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating that is exactly that required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array.
This paper will discuss the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.

  20. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
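
The depth path of the second scheme (run-length encoding followed by entropy coding) can be sketched as below. Here zlib stands in for the Huffman coder, and the one-byte packing of values and runs is our simplification, not the paper's format:

```python
import zlib

def rle_encode(depth_row):
    """Run-length encode a sequence of depth values as [value, run] pairs."""
    runs = []
    for v in depth_row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def compress_depth(depth_row):
    """RLE, then entropy-code the (value, run) byte stream.
    Sketch only: each value and run length must fit in one byte."""
    flat = bytes(b for v, n in rle_encode(depth_row) for b in (v, n))
    return zlib.compress(flat)
```

Depth maps contain long runs of identical or invalid values, which is why even this trivial RLE stage removes most of the redundancy before entropy coding.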

  1. NATUREYES: Development of surveillance monitoring system for open spaces based on the analysis of different optical sensors data and environmental conditions

    NASA Astrophysics Data System (ADS)

    Molina-Jimenez, Teresa; Caballero-Aroca, Jose; Simón-Martín, Santiago; Hervás-Juan, Juan; García-Martínez, Jose-David; Pérez-Picazo, Emilio; Dolz-García, Ramón; Pons-Vila, Alejandro; Quintana-Rumbau, Salvador; Valiente Pardo, Jose Antonio; Estrela, Maria José; Pastor-Guzmán, Francisco

    2005-09-01

    We present the results of an R&D project aimed at producing an environmental surveillance system that, working in wild areas, allows real-time observation and control of ambient factors that could produce a natural disaster. The main objective of the project is the development of an open platform capable of working with several kinds of sensors, in order to adapt itself to the needs of each situation. The detection of environmental risks and the management of these data to give a real-time response is the overall objective of the project. The main parts of the system are: 1. Detection system: capable of performing real-time data and image communication, fully autonomous, and designed with the environmental conditions in mind. 2. Alarm management headquarters: real-time reception of data from the detector network. All the data are analysed to enable a decision about whether or not there is an alarm situation. 3. Mobile alarm-reception system: portable system for reception of the alarm signal from the headquarters. The project was financed by the Science and Technology Ministry, National Research and Development Programme (TIC2000-0366-P4, 2001-2004).

  2. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method using a subsequence of the video. This subsequence can be captured by a rigidly moving camera observing the static target object, or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and within the framework of classical template-based methods, we solve an energy minimization problem to find the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy combines a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost which enables elastic deformation. In this paper, an easy and controllable solution to generate the semi-dense template for complex objects is presented. In addition, we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared against results obtained with other templates as input, the reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we used is more efficient than a traditional conjugate-gradient-based solver.

  3. Using Parameters of Dynamic Pulse Function for 3D Modeling in LoD3 Based on Random Textures

    NASA Astrophysics Data System (ADS)

    Alizadehashrafi, B.

    2015-12-01

    The pulse function (PF) is a procedural preprocessing technique that generates a computerized virtual photo of a façade within a fixed-size square (Alizadehashrafi et al., 2009; Musliman et al., 2010). The Dynamic Pulse Function (DPF) is an enhanced version of PF which creates the final photo proportional to the real geometry, avoiding distortion when the computerized photo is projected onto the generated 3D model (Alizadehashrafi and Rahman, 2013). The challenge addressed in this paper is extending this technique from 3D models in LoD2 to LoD3. In the DPF-based technique, the geometries of the windows and doors are saved in an XML schema that has no connection with the 3D model in LoD2 and CityGML format. In this research, the parameters of the Dynamic Pulse Function are used via the Ruby programming language in Trimble SketchUp to generate the windows and doors (exact position and depth) automatically in LoD3, based on the same DPF concept. The advantage of this technique is the automatic generation of a huge number of similar geometries, e.g. windows, by using the DPF parameters along with defined entities and window layers. When the SKP file is converted to CityGML via FME software or CityGML plugins, the 3D model contains a semantic database about the entities and window layers which can connect the CityGML to MySQL (Alizadehashrafi and Baig, 2014). The concept behind DPF is to use logical operations to project the texture onto the background image in a way that is dynamically proportional to the real geometry. The projection is based on two dynamic pulses, vertical and horizontal, starting from the upper-left corner of the background wall and running in the down and right directions, respectively, in the image coordinate system. A logical one/zero at the intersections of the two pulses projects/does not project the texture onto the background image.
It is possible to define a priority for each layer. For instance, the priority of the door layer can be higher than that of the window layer, meaning that window texture cannot be projected onto the door layer. Orthogonal, rectified, perpendicular, symmetric photos of the 3D objects, proportional to the real façade geometry, must be used to generate the output frame for DPF. The DPF produces very high-quality and small output image files of considerably smaller dimension compared with the photorealistic texturing method. The disadvantage of DPF is that it is a preprocessing method that generates an output image file, rather than an online process that generates the texture within the 3D environment such as CityGML. Furthermore, the result of DPF can be used for 3D models in LoD2 rather than LoD3. In the current work, the random textures of the window layers are created based on DPF parameters within the Ruby console of Trimble SketchUp to generate the deeper geometries of the windows and their exact positions on the façade automatically, along with random textures to increase the Level of Realism (LoR) (Scarpino, 2010). As the output frame in DPF is proportional to the real geometry (the height and width of the façade), it is possible to query the XML database and convert the values to units such as meters automatically. In this technique, the perpendicular terrestrial photo of the façade is rectified by a projective transformation based on a frame that is in constrained proportion to the real geometry. The rectified photos, which are not suitable for texturing but are necessary for measuring, can be resized in constrained proportion to the real geometry before the measuring process. The heights and widths of the windows and doors, and the horizontal and vertical distances between windows from the upper-left corner of the photo, are the parameters that must be measured to run the program as a plugin in Trimble SketchUp.
The system can use these parameters, together with texture file names and paths, to create the façade semi-automatically. To avoid leaning geometry, the textures of windows, doors, etc. should be cropped and rectified from perpendicular photos so that they can be used in the program to create the whole façade along with its geometries. Texture enhancement, such as removing disturbing objects, exposure adjustment, and left-right/up-down transformation, should be done in advance. In fact, the quality, small data size, scale, and semantic database for each façade are the prominent advantages of this method.
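
The two-pulse projection logic described above can be illustrated with a small sketch. Array sizes and the constant "texture" value are hypothetical; the actual method projects image textures, not constants:

```python
import numpy as np

def pulse(length, starts, width):
    """Binary pulse train: high inside each [start, start + width) interval."""
    p = np.zeros(length, dtype=bool)
    for s in starts:
        p[s:s + width] = True
    return p

def project_texture(wall, texture_value, h_pulse, v_pulse):
    """Paint texture_value wherever the vertical AND horizontal pulses are
    both high, i.e. at the window positions on the facade image."""
    mask = np.outer(v_pulse, h_pulse)  # logical AND of the two pulse trains
    out = wall.copy()
    out[mask] = texture_value
    return out
```

Each start/width pair in the horizontal pulse marks a column of windows and each pair in the vertical pulse a row, so their outer product marks every window rectangle at once.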

  4. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery

    NASA Astrophysics Data System (ADS)

    Ipsen, S.; Blanck, O.; Lowther, N. J.; Liney, G. P.; Rai, R.; Bode, F.; Dunst, J.; Schweikard, A.; Keall, P. J.

    2016-11-01

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction between maximum dilation and contraction was ⩽ 3.9 mm, but anisotropic. A template matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data, in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP), and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction.
If the detected motion can be compensated accordingly, an MRI-guided radiotherapy system could potentially enable completely non-invasive treatment of AF.

  5. Microscopic 3D measurement of dynamic scene using optimized pulse-width-modulation binary fringe

    NASA Astrophysics Data System (ADS)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-10-01

    Microscopic 3-D shape measurement can provide accurate metrology of delicate and complex MEMS components to ensure the proper performance of the final devices. Fringe projection profilometry (FPP) has the advantages of being noncontact and highly accurate, making it widely used in 3-D measurement. Recently, advances in electronics have made 3-D measurement faster and more accurate. However, real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. Slight defocusing of our proposed binary patterns considerably alleviates the measurement error in phase-shifting FPP, giving the binary patterns performance comparable with ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (FPS) is achieved, and an experimental result on a vibrating earphone is presented.
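
Phase-shifting FPP recovers a wrapped phase map from N patterns shifted by 2π/N each. The standard N-step relation can be sketched as below; this is the generic textbook formula, not the authors' full pipeline, which additionally performs number-theoretical phase unwrapping:

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shifting: images[n] is captured under a fringe pattern
    I_n = A + B*cos(phi + 2*pi*n/N). Returns the wrapped phase phi,
    which for N >= 3 cancels the offset A and modulation B exactly."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return -np.arctan2(num, den)
```

The defocused binary patterns in the paper approximate the sinusoidal intensity profile this formula assumes, which is why the same phase retrieval applies to them.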

  6. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamically moving subjects with pan, tilt, and zoom camerawork as well as current TV program production does. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera, controlled by a camera operator, and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  7. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb, using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences showing several test persons performing different working actions typical of an industrial production scenario. In all example scenes, the background is cluttered and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.

  8. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    PubMed

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been around for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification, contrast, and nanometer-scale resolution. Whereas SEM micrographs still remain two-dimensional (2D), many research and educational questions truly require knowledge and facts about their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. From SFM to 3D Print: Automated Workflow Addressed to Practitioner Aimed at the Conservation and Restauration

    NASA Astrophysics Data System (ADS)

    Inzerillo, L.; Di Paola, F.

    2017-08-01

    In the last years there has been an increasing use of digital techniques for conservation and restoration purposes. Among these, a dominant role is played by digital photogrammetry packages (Agisoft Photoscan, 3D Zephir), which allow one to obtain, in a few steps, 3D textured models of real objects. Combined with digital documentation technologies, digital fabrication technologies can be employed in a variety of ways to assist in heritage documentation, conservation and dissemination. This paper gives practitioners an overview of the state-of-the-art available technologies and a feasible workflow for optimizing point cloud and polygon mesh datasets for the purpose of fabrication using 3D printing. The goal is to make an important contribution toward automating the whole process. We tried to identify a workflow that should be applicable to several types of cases, apart from small precautions. In our experimentation we used a DELTA WASP 2040 printer with PLA easyfil.
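As an aside on the fabrication step of such a workflow, most slicers accept triangle meshes in the STL format; a minimal sketch of an ASCII STL serializer (writing zero normals is a simplification — most slicers recompute them from the vertices):

```python
def write_ascii_stl(triangles, name="model"):
    """Serialize triangles [((x,y,z), (x,y,z), (x,y,z)), ...] to ASCII STL."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")  # slicers typically recompute
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A single triangle as the smallest possible example:
stl = write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

In practice the mesh exported from the photogrammetry package would first be cleaned and made watertight before serialization.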

  10. A 3D visualization and simulation of the individual human jaw.

    PubMed

    Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo

    2003-01-01

    A new biomechanical three-dimensional (3D) model of the human mandible based on a computer-generated virtual model is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation provides movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces acting through the occlusal and condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw, along with the force level necessary for chewing, as a kind of mandible balance, preventing dislocation and loading of nonarticular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), combined in a single low-cost package that is easy to operate.

  11. 3D printing of optical materials: an investigation of the microscopic properties

    NASA Astrophysics Data System (ADS)

    Persano, Luana; Cardarelli, Francesco; Arinstein, Arkadii; Uttiya, Sureeporn; Zussman, Eyal; Pisignano, Dario; Camposeo, Andrea

    2018-02-01

    3D printing technologies are currently enabling the fabrication of objects with complex architectures and tailored properties. In this framework, the production of 3D optical structures, which are typically based on optically transparent matrices, optionally doped with active molecular compounds and nanoparticles, is still limited by the poor uniformity of the printed structures. Both bulk inhomogeneities and surface roughness of the printed structures can negatively affect the propagation of light in 3D printed optical components. Here we investigate photopolymerization-based printing processes by laser confocal microscopy. The experimental method we developed allows the printing process to be investigated in situ, with microscale spatial resolution, and in real time. The modelling of the photo-polymerization kinetics allows the different polymerization regimes to be investigated and the influence of process variables to be rationalized. In addition, the origins of the factors limiting light propagation in printed materials are rationalized, with the aim of envisaging effective experimental strategies to improve the optical properties of printed materials.

  12. Point-of-care testing: applications of 3D printing.

    PubMed

    Chan, Ho Nam; Tan, Ming Jun Andrew; Wu, Hongkai

    2017-08-08

    Point-of-care testing (POCT) devices fulfil a critical need in the modern healthcare ecosystem, enabling the decentralized delivery of imperative clinical strategies in both the developed and developing worlds. To achieve diagnostic utility and clinical impact, POCT technologies are immensely dependent on effective translation from academic laboratories out to real-world deployment. However, the current research and development pipeline is highly bottlenecked owing to multiple restraints in material, cost, and complexity of conventionally available fabrication techniques. Recently, 3D printing technology has emerged as a revolutionary, industry-compatible method enabling cost-effective, facile, and rapid manufacturing of objects. This has allowed iterative design-build-test cycles of devices ranging from microfluidic chips to smartphone interfaces that are geared towards point-of-care applications. In this review, we focus on highlighting recent works that exploit 3D printing in developing POCT devices, underscoring its utility in all analytical steps. Moreover, we also discuss key advantages of adopting 3D printing in the device development pipeline and identify promising opportunities in 3D printing technology that can benefit global health applications.

  13. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.
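The Δμ and Doyle metrics mentioned above compare first-order statistics of target and background pixels. A sketch under the commonly used definitions Δμ = |μt − μb| and Doyle = √((μt − μb)² + (σt − σb)²), which may differ in detail from the exact formulations used in the paper:

```python
from math import sqrt

def mean_std(pixels):
    """Population mean and standard deviation of a pixel list."""
    m = sum(pixels) / len(pixels)
    var = sum((p - m) ** 2 for p in pixels) / len(pixels)
    return m, sqrt(var)

def delta_mu(target, background):
    """First-order contrast metric: absolute difference of means."""
    return abs(mean_std(target)[0] - mean_std(background)[0])

def doyle(target, background):
    """Doyle-style metric combining mean and std differences (assumed form)."""
    mt, st = mean_std(target)
    mb, sb = mean_std(background)
    return sqrt((mt - mb) ** 2 + (st - sb) ** 2)

# Hypothetical intensity samples from a target region and its background:
tgt = [120, 130, 125, 135]
bkg = [100, 110, 105, 115]
```

Running such metrics on matched synthetic and real image pairs is what allows the comparison the abstract describes.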

  14. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics.

  15. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matchable to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  16. Success Factors for Adoption of Real-Time Java

    DTIC Science & Technology

    2010-04-01

    [OCR-garbled slide excerpt; only fragments are legible:] "...use Java in an effective way in order to achieve objectives..." "...in order to effectively use object-oriented programming practices..." "...find all garbage, nor to defragment the available free pool. Common operations may have surprising effects (e.g. entering a mutual exclusion region)..." © 2010 Atego. All rights reserved.

  17. Target tracking and surveillance by fusing stereo and RFID information

    NASA Astrophysics Data System (ADS)

    Raza, Rana H.; Stockman, George C.

    2012-06-01

    Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instance of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and activity understanding level.
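The pruning of the correspondence search by partial tag IDs can be illustrated with a toy assignment enumerator; the names and the "?" convention for an unread tag are hypothetical, not from the paper:

```python
from itertools import permutations

def consistent_assignments(track_ids, detection_ids):
    """Enumerate detection orderings whose tag IDs match the tracks.

    A '?' detection ID means the tag was not read and matches any track.
    """
    out = []
    for perm in permutations(detection_ids):
        if all(d == "?" or d == t for t, d in zip(track_ids, perm)):
            out.append(perm)
    return out

tracks = ["A", "B", "C"]
# Two detections carry readable tags; one tag read was missed.
dets = ["B", "?", "A"]
options = consistent_assignments(tracks, dets)
```

Here the ID constraints prune the 3! = 6 candidate assignments down to a single consistent one; the abstract reports the same effect at much larger scale, where it dominates computation time.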

  18. Initial clinical experience of real-time three-dimensional echocardiography in patients with ischemic and idiopathic dilated cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Shiota, T.; McCarthy, P. M.; White, R. D.; Qin, J. X.; Greenberg, N. L.; Flamm, S. D.; Wong, J.; Thomas, J. D.

    1999-01-01

    The geometry of the left ventricle in patients with cardiomyopathy is often sub-optimal for 2-dimensional ultrasound when assessing left ventricular (LV) function and localized abnormalities such as a ventricular aneurysm. The aim of this study was to report the initial experience of real-time 3-D echocardiography for evaluating patients with cardiomyopathy. A total of 34 patients were evaluated with the real-time 3D method in the operating room (n = 15) and in the echocardiographic laboratory (n = 19). Thirteen of 28 patients with cardiomyopathy and 6 other subjects with normal LV function were evaluated by both real-time 3-D echocardiography and magnetic resonance imaging (MRI) for obtaining LV volumes and ejection fractions for comparison. There were close relations and agreements for LV volumes (r = 0.98, p <0.0001, mean difference = -15 +/- 81 ml) and ejection fractions (r = 0.97, p <0.0001, mean difference = 0.001 +/- 0.04) between the real-time 3D method and MRI when 3 cardiomyopathy cases with marked LV dilatation (LV end-diastolic volume >450 ml by MRI) were excluded. In these 3 patients, 3D echocardiography significantly underestimated the LV volumes due to difficulties with imaging the entire LV in a 60 degrees x 60 degrees pyramidal volume. The new real-time 3D echocardiography is feasible in patients with cardiomyopathy and may provide a faster and lower cost alternative to MRI for evaluating cardiac function in patients.
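The agreement figures quoted above (r, mean difference) correspond to Pearson correlation and the Bland-Altman bias between paired volume measurements; a sketch with made-up numbers, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_difference(x, y):
    """Bland-Altman bias: mean of (method 1 - method 2)."""
    return sum(a - b for a, b in zip(x, y)) / len(x)

echo = [100.0, 150.0, 210.0, 260.0]  # hypothetical 3D-echo LV volumes (ml)
mri  = [110.0, 160.0, 200.0, 270.0]  # hypothetical MRI LV volumes (ml)
```

A high r with a near-zero mean difference is what supports the abstract's claim of close relations and agreement between the two modalities.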

  19. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

    Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery, the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated, fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary- and fixed-wing aircraft. We conclude with a discussion of future work.
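The interpolation of LiDAR pixels up to context-camera resolution can be sketched in its simplest (nearest-neighbor) form; the actual system presumably uses a more sophisticated scheme:

```python
def upsample_nearest(depth, factor):
    """Upsample a 2D depth grid by an integer factor via nearest neighbor."""
    return [
        [depth[r // factor][c // factor]
         for c in range(len(depth[0]) * factor)]
        for r in range(len(depth) * factor)
    ]

# A 2x2 LiDAR depth patch upsampled to the 4x4 resolution of a
# (hypothetical) boresighted context camera:
lidar = [[1.0, 2.0],
         [3.0, 4.0]]
hd = upsample_nearest(lidar, 2)
```

Because the context camera is frame-synchronized, each upsampled depth pixel can then be paired directly with a context-camera pixel to color the point cloud.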

  20. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS, and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that the variability of the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  1. Beat to beat 3-dimensional intracardiac echocardiography: theoretical approach and practical experiences.

    PubMed

    Stapf, Daniel; Franke, Andreas; Schreckenberg, Marcus; Schummers, Georg; Mischke, Karl; Marx, Nikolaus; Schauerte, Patrick; Knackstedt, Christian

    2013-04-01

    Three-dimensional (3D) imaging provides important information on cardiac anatomy during electrophysiological procedures. Real-time updates of modalities with high soft-tissue contrast are particularly advantageous during cardiac procedures. Therefore, a beat-to-beat 3D visualization of cardiac anatomy by intracardiac echocardiography (ICE) was developed and tested in phantoms and animals. An electronic phased-array 5-10 MHz ICE catheter (Acuson AcuNav/Siemens Medical Solutions USA, 64 elements) providing a 90° sector image was used for ICE imaging. A custom-made mechanical prototype controlled by a servo motor allowed automatic rotation of the ICE catheter around its longitudinal axis. During a single heartbeat, the ICE catheter was rotated and 2D images were acquired. Reconstruction into a 3D volume and rendering by prototype software was performed beat to beat. After experimental validation using a rigid phantom, the system was tested in an animal study and afterwards, for quantitative validation, in a dynamic phantom. Beat-to-beat 3D reconstruction was technically feasible. However, twisting of the ICE catheter shaft due to friction and torsion was found, and rotation was hampered. Also, depiction of catheters was not always ensured in cases of parallel alignment. When a curved sheath was used for depiction of cardiac anatomy, the shape and dimensions of static and moving objects were not depicted congruently. Beat-to-beat 3D ICE imaging is feasible. However, the shape and dimensions of static and moving objects cannot always be displayed with the steadiness needed in the clinical setting. As catheter depiction is also limited, clinical use seems impossible.

  2. UV-Assisted 3D Printing of Glass and Carbon Fiber-Reinforced Dual-Cure Polymer Composites

    PubMed Central

    Invernizzi, Marta; Natale, Gabriele; Levi, Marinella; Turri, Stefano; Griffini, Gianmarco

    2016-01-01

    Glass (GFR) and carbon fiber-reinforced (CFR) dual-cure polymer composites fabricated by UV-assisted three-dimensional (UV-3D) printing are presented. The resin material combines an acrylic-based photocurable resin with a low temperature (140 °C) thermally-curable resin system based on bisphenol A diglycidyl ether as base component, an aliphatic anhydride (hexahydro-4-methylphthalic anhydride) as hardener and 2,4,6-tris(dimethylaminomethyl)phenol as catalyst. A thorough rheological characterization of these formulations allowed us to define their 3D printability window. UV-3D printed macrostructures were successfully demonstrated, giving a clear indication of their potential use in real-life structural applications. Differential scanning calorimetry and dynamic mechanical analysis highlighted the good thermal stability and mechanical properties of the printed parts. In addition, uniaxial tensile tests were used to assess the fiber reinforcing effect on the UV-3D printed objects. Finally, an initial study was conducted on the use of a sizing treatment on carbon fibers to improve the fiber/matrix interfacial adhesion, giving preliminary indications on the potential of this approach to improve the mechanical properties of the 3D printed CFR components. PMID:28773704

  3. UV-Assisted 3D Printing of Glass and Carbon Fiber-Reinforced Dual-Cure Polymer Composites.

    PubMed

    Invernizzi, Marta; Natale, Gabriele; Levi, Marinella; Turri, Stefano; Griffini, Gianmarco

    2016-07-16

    Glass (GFR) and carbon fiber-reinforced (CFR) dual-cure polymer composites fabricated by UV-assisted three-dimensional (UV-3D) printing are presented. The resin material combines an acrylic-based photocurable resin with a low temperature (140 °C) thermally-curable resin system based on bisphenol A diglycidyl ether as base component, an aliphatic anhydride (hexahydro-4-methylphthalic anhydride) as hardener and 2,4,6-tris(dimethylaminomethyl)phenol as catalyst. A thorough rheological characterization of these formulations allowed us to define their 3D printability window. UV-3D printed macrostructures were successfully demonstrated, giving a clear indication of their potential use in real-life structural applications. Differential scanning calorimetry and dynamic mechanical analysis highlighted the good thermal stability and mechanical properties of the printed parts. In addition, uniaxial tensile tests were used to assess the fiber reinforcing effect on the UV-3D printed objects. Finally, an initial study was conducted on the use of a sizing treatment on carbon fibers to improve the fiber/matrix interfacial adhesion, giving preliminary indications on the potential of this approach to improve the mechanical properties of the 3D printed CFR components.

  4. Three-dimensional displacement measurement of image point by point-diffraction interferometry

    NASA Astrophysics Data System (ADS)

    He, Xiao; Chen, Lingfeng; Meng, Xiaojie; Yu, Lei

    2018-01-01

    This paper presents a method for measuring the three-dimensional (3-D) displacement of an image point based on point-diffraction interferometry. An object point-light-source (PLS) interferes with a fixed PLS, and their interferograms are captured at the exit pupil. When the image point of the object PLS is slightly shifted to a new position, the wavefront of the image PLS changes, and so do its interferograms. By processing these interferograms (captured before and after the movement), the wavefront difference of the image PLS can be obtained, and it contains the information of the 3-D displacement of the image PLS. However, the 3-D displacement cannot be calculated until the distance between the image PLS and the exit pupil is calibrated. Therefore, we use a plane-parallel plate with a known refractive index and thickness to determine this distance, based on Snell's law for small angles of incidence. Thus, since the distance between the exit pupil and the image PLS is known, the 3-D displacement of the image PLS can be calculated from two interference measurements. Preliminary experimental results indicate that the relative error is below 0.3%. With the ability to accurately locate an image point (whether real or virtual), a fiber point-light-source can act as the reticle by itself in optical measurement.
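For the plane-parallel-plate step, the standard paraxial result is that a plate of thickness t and refractive index n shifts an image axially by Δ = t(1 − 1/n); a one-line sketch of that relation (how the paper applies it to calibrate the pupil distance is not detailed in the abstract):

```python
def axial_image_shift(thickness_mm, n):
    """Paraxial axial image shift introduced by a plane-parallel plate."""
    return thickness_mm * (1.0 - 1.0 / n)

# A 3 mm plate of index 1.5 (BK7-like) displaces the image by 1 mm:
shift = axial_image_shift(3.0, 1.5)
```

Measuring the interferogram change caused by this known displacement gives one equation relating the unknown pupil-to-image distance to observable quantities.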

  5. Documentation of Historical Underground Object in Skorkov Village with Selected Measuring Methods, Data Analysis and Visualization

    NASA Astrophysics Data System (ADS)

    Dlesk, A.

    2016-06-01

    The author analyzes current methods of 3D documentation of historical tunnels in Skorkov village, which lies on the Jizera river, approximately 30 km from Prague. The area is known as a former military camp from the Thirty Years' War in the 17th century. There is an extensive underground compound with one entrance corridor and two transverse corridors, situated approximately 2 to 5 m under the local development. The object has been partly documented by the geodetic polar method, intersection photogrammetry, image-based modelling and laser scanning. The data have been analyzed and the methods compared. A 3D model of the object has then been created and combined with cadastral data, an orthophoto, historical maps and a digital surface model produced photogrammetrically using a remotely piloted aircraft system. Measurements have also been made with ground-penetrating radar; the data have been analyzed and the results compared with the real state. All the data have been combined and visualized in one 3D model. Finally, the advantages and disadvantages of the measuring methods used are discussed. The tested methodology has also been used for documentation of other historical objects in this area. This project was created as part of research at the EuroGV s.r.o. company, led by Ing. Karel Vach, CSc., in cooperation with prof. Dr. Ing. Karel Pavelka from the Czech Technical University in Prague and Miloš Gavenda, the renovator.

  6. Two- and three-dimensional transvaginal ultrasound with power Doppler angiography and gel infusion sonography for diagnosis of endometrial malignancy.

    PubMed

    Dueholm, M; Christensen, J W; Rydbjerg, S; Hansen, E S; Ørtoft, G

    2015-06-01

    To evaluate the diagnostic efficiency of two-dimensional (2D) and three-dimensional (3D) transvaginal ultrasonography, power Doppler angiography (PDA) and gel infusion sonography (GIS) in offline analysis for recognition of malignant endometrium compared with real-time evaluation during scanning, and to determine optimal image parameters in 3D analysis. One hundred and sixty-nine consecutive women with postmenopausal bleeding and endometrial thickness ≥ 5 mm underwent systematic evaluation of endometrial pattern on 2D imaging, and 2D videoclips and 3D volumes were later analyzed offline. Histopathological findings at hysteroscopy or hysterectomy were used as the reference standard. The efficiency of the different techniques for diagnosis of malignancy was calculated and compared. 3D image parameters, endometrial volume and 3D vascular indices were assessed. Optimal 3D image parameters were transformed by logistic regression into a risk of endometrial cancer (REC) score, including scores for body mass index, endometrial thickness and endometrial morphology at gray-scale and PDA and GIS. Offline 2D and 3D analysis were equivalent, but had lower diagnostic performance compared with real-time evaluation during scanning. Their diagnostic performance was not markedly improved by the addition of PDA or GIS, but their efficiency was comparable with that of real-time 2D-GIS in offline examinations of good image quality. On logistic regression, the 3D parameters from the REC-score system had the highest diagnostic efficiency. The area under the curve of the REC-score system at 3D-GIS (0.89) was not improved by inclusion of vascular indices or endometrial volume calculations. Real-time evaluation during scanning is most efficient, but offline 2D and 3D analysis is useful for prediction of endometrial cancer when good image quality can be obtained. The diagnostic efficiency of 3D analysis may be improved by use of REC-scoring systems, without the need for calculation of vascular indices or endometrial volume. The optimal imaging modality appears to be real-time 2D-GIS. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.

  7. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    PubMed

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of Orthoptic treatment, a number of authors have suggested that it could negatively lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. © 2011 The College of Optometrists.
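The accommodation-convergence mismatch described above is commonly quantified in diopters as the difference between the reciprocal viewing distances; a sketch of that arithmetic (the thresholds for "large" and "small" remain idiosyncratic to the individual, as the abstract notes):

```python
def mismatch_diopters(screen_m, simulated_m):
    """Vergence-accommodation conflict in diopters.

    Accommodation stays at the screen (1/screen_m D), while disparity
    drives convergence to the simulated object distance (1/simulated_m D).
    """
    return abs(1.0 / screen_m - 1.0 / simulated_m)

# An object rendered 1 m in front of the viewer on a screen 2 m away:
conflict = mismatch_diopters(2.0, 1.0)
```

A small conflict falls within the zone of clear, comfortable, single binocular vision for most viewers; a large one is where asthenopic symptoms are predicted.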

  8. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
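
    The 3D reconstruction step for a calibrated two-camera setup of this kind can be illustrated with standard linear (DLT) triangulation. This is a generic sketch of the principle, not the PRIMAS implementation; the projection matrices and the point are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 in two calibrated cameras with 3x4 projection
    matrices P1, P2, via the SVD null space of the constraint matrix."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

With noise-free marker detections the recovery is exact; with real detections the SVD solution is the least-squares compromise between the two rays.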

  9. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standards for the metrological parameters that characterize a device's capability of capturing a real scene. For this reason, several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper reviews the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified to a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between the barycenters of rigidly connected spheres, frequency-domain parameters, etc.). The paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices, based on both triangulation and direct range detection.
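
    The core verification step these protocols share, comparing measured points against a certified shape, can be sketched for the simplest artifact, a sphere of known radius. For brevity the center is taken as known (a real protocol would first fit the sphere to the data); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scan of a certified sphere: radius 25 mm, center known,
# with Gaussian measurement noise standing in for device error.
R_cert = 25.0
center = np.array([10.0, -5.0, 120.0])
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = center + R_cert * dirs + rng.normal(scale=0.05, size=(2000, 3))

# Radial deviation of each measured point from the certified surface.
dev = np.linalg.norm(points - center, axis=1) - R_cert
print(f"RMS deviation:   {np.sqrt(np.mean(dev**2)):.3f} mm")
print(f"Max |deviation|: {np.abs(dev).max():.3f} mm")
```

The RMS of `dev` plays the role of a simple uncertainty figure; derived parameters (sphere-spacing errors, plane angles) follow the same measure-then-compare pattern.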

  10. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the aim of resembling the appearance and mobility of a real human hand as closely as possible while requiring no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than that of other robotic hands (excluding actuators), since those require more complex assembly processes.

  11. 3D reconstruction of microminiature objects based on contour line

    NASA Astrophysics Data System (ADS)

    Li, Cailin; Wang, Qiang; Guo, Baoyun

    2009-10-01

    A new automatic 3D reconstruction method for micro solids of revolution is presented in this paper. An image sequence covering 360° of the solid of revolution is captured in a back-lit rotary photographic mode, with the rotation speed precisely controlled by a motor. First, the turntable height, the pixel size and the turntable rotation axis must be calibrated. Then, using the calibrated rotation axis, turntable height, rotation angle and pixel size, the contour points of each image are transformed into 3D points in the reference coordinate system to generate a point cloud model. Finally, the surface geometric model of the solid of revolution is obtained using the relationship between adjacent contours. Experimental results on real images are presented, which demonstrate the effectiveness of the approach.
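
    The back-projection step, mapping silhouette contour points into the reference frame using the calibrated axis, pixel size and rotation angle, can be sketched as follows. This assumes an idealized orthographic, back-lit geometry and invented calibration values, not the paper's exact camera model.

```python
import numpy as np

def contour_to_cloud(contours_px, angles_deg, pixel_size, axis_x_px):
    """Back-project silhouette contours taken at known turntable angles
    into one point cloud. Each contour is an (N, 2) array of (u, v) pixel
    coordinates; axis_x_px is the calibrated image column of the rotation
    axis (orthographic back-lit setup assumed)."""
    cloud = []
    for pts, ang in zip(contours_px, angles_deg):
        r = (pts[:, 0] - axis_x_px) * pixel_size   # signed radius from the axis
        z = pts[:, 1] * pixel_size                 # height along the axis
        t = np.radians(ang)
        cloud.append(np.column_stack([r * np.cos(t), r * np.sin(t), z]))
    return np.vstack(cloud)

# Toy example: the right edge of a radius-2 cylinder seen at two angles
# (axis calibrated at image column 10, pixel size 1.0).
c = np.array([[12.0, 0.0], [12.0, 1.0]])
cloud = contour_to_cloud([c, c], [0.0, 90.0], pixel_size=1.0, axis_x_px=10.0)
print(np.allclose(np.linalg.norm(cloud[:, :2], axis=1), 2.0))  # True
```

Every back-projected point lies at the correct radius from the axis, so sweeping the full 360° of contours traces out the surface of revolution.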

  12. Three-dimensional broadband acoustic illusion cloak for sound-hard boundaries of curved geometry

    PubMed Central

    Kan, Weiwei; Liang, Bin; Li, Ruiqi; Jiang, Xue; Zou, Xin-ye; Yin, Lei-lei; Cheng, Jianchun

    2016-01-01

    Acoustic illusion cloaks that create illusion effects by changing the scattered wave have many potential applications in a variety of scenarios. However, the experimental realization of generating three-dimensional (3D) acoustic illusions under detection of broadband signals still remains challenging despite the paramount importance for practical applications. Here we report the design and experimental demonstration of a 3D broadband cloak that can effectively manipulate the scattered field to generate the desired illusion effect near curved boundaries. The designed cloak simply comprises positive-index anisotropic materials, with parameters completely independent of either the cloaked object or the boundary. With the ability of manipulating the scattered field in 3D space and flexibility of applying to arbitrary geometries, our method may take a major step toward the real world application of acoustic cloaks and offer the possibilities of building advanced acoustic devices with versatile functionalities. PMID:27833141

  13. Quantum vision in three dimensions

    NASA Astrophysics Data System (ADS)

    Roth, Yehuda

    We present four models for describing 3-D vision. As in the mirror scenario, our models allow 3-D vision with no need for additional accessories such as stereoscopic glasses or a hologram film. These four models are based on brain interpretation rather than purely objective encoding. We consider the observer's "subjective" selection of a measuring device, and the corresponding quantum collapse into one of the selected states, as a tool for interpreting reality according to the observer's concepts. This is the basic concept of our study, and it is introduced in the first model. The other models suggest "softened" versions that may be much easier to implement. Our quantum-interpretation approach contributes to the following fields. Technology: the proposed models can be implemented in real devices, allowing 3-D vision without additional accessories. Artificial intelligence: in the effort to create a machine that exchanges information using human terminologies, our interpretation approach seems appropriate.

  14. A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing

    NASA Astrophysics Data System (ADS)

    Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.

    2017-12-01

    Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for an origami-type, laser-based custom manufacturing machine that folds and cuts sheet material to manufacture 3D objects, and we have developed such a system for an autonomous machine equipped with real-time sensing and control. The basic elements of the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system that determines control inputs for the laser system based on the estimated data and the user's job requests, a robotic arm that manipulates the workpiece in the workspace, and middleware, named Etherware, supporting communication among the systems. As an experimental result, we demonstrate automated 3D laser cutting and bending to fabricate a 3D product.

  15. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    PubMed Central

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-01-01

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered from the classification results, yielding the number of grape bunches, the number of berries and the berry diameter. PMID:27983669

  16. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.

    PubMed

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-12-15

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered from the classification results, yielding the number of grape bunches, the number of berries and the berry diameter.

  17. Visualizing 3D data obtained from microscopy on the Internet.

    PubMed

    Pittet, J J; Henn, C; Engel, A; Heymann, J B

    1999-01-01

    The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. Real-world objects, traditionally presented as static two-dimensional images on the limited medium of paper, can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching-cube approach, allowing interactive isosurfacing. A second node does three-dimensional (3D) texture-based volume rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file-format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, which aims to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
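
    The per-cell classification at the heart of a marching-cubes isosurfacer (the step a real-time VRML node must rerun on every isovalue change) can be sketched as follows; a full implementation would go on to emit triangles for each flagged cell according to the 256-case lookup table.

```python
import numpy as np

def crossing_cells(volume, iso):
    """Classification step of a marching-cubes style isosurfacer:
    flag every 2x2x2 cell whose eight corner values straddle the
    isovalue (some above, some below). Only these cells can carry
    a piece of the isosurface."""
    above = volume > iso
    any_above = np.zeros(np.array(volume.shape) - 1, dtype=bool)
    all_above = np.ones_like(any_above)
    for dx in (0, 1):               # OR / AND over the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                corner = above[dx:dx + any_above.shape[0],
                               dy:dy + any_above.shape[1],
                               dz:dz + any_above.shape[2]]
                any_above |= corner
                all_above &= corner
    return any_above & ~all_above

# A distance field in a 32^3 volume: only cells on the radius-10
# isosurface should be flagged, not the interior or the far corners.
g = np.mgrid[0:32, 0:32, 0:32]
dist = np.sqrt(((g - 15.5) ** 2).sum(axis=0))
cells = crossing_cells(dist, iso=10.0)
print(cells.sum() > 0, cells.sum() < cells.size)  # True True
```

Because the flagged set is a thin shell, the expensive triangle-emission work scales with the surface, not the volume, which is what makes interactive isosurfacing feasible.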

  18. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    Keywords: visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing. [Abstract fragments recovered from the report form:] ...scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual... and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user...

  19. GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging

    DOE PAGES

    Pryor, Alan; Yang, Yongsoo; Rana, Arjun; ...

    2017-09-05

    Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
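
    The iterate-between-spaces idea can be illustrated with a minimal 2D alternating-projection toy: enforce the measured Fourier samples in reciprocal space, then enforce non-negativity in real space. This is a sketch of the general principle only, not GENFIRE itself, which adds oversampled 3D gridding from tilted projections and angular refinement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth non-negative 2D object (stand-in for a 3D volume).
obj = np.zeros((32, 32))
obj[10:22, 12:20] = 1.0

# "Measured" data: a random half of the Fourier components.
F_true = np.fft.fft2(obj)
mask = rng.random(obj.shape) < 0.5
mask[0, 0] = True  # keep the DC term

est = np.zeros_like(obj)
errs = []
for _ in range(300):
    F = np.fft.fft2(est)
    F[mask] = F_true[mask]              # reciprocal-space constraint: match data
    est = np.real(np.fft.ifft2(F))
    est[est < 0] = 0.0                  # real-space constraint: non-negativity
    errs.append(np.linalg.norm(est - obj) / np.linalg.norm(obj))

print(errs[-1] < errs[0])  # True: the iterations refine the estimate
```

Because both constraint sets are convex and contain the true object, the error to the truth is non-increasing (Fejér monotonicity), which is why this style of iteration can fill in the unmeasured region of Fourier space.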

  20. GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pryor, Alan; Yang, Yongsoo; Rana, Arjun

    Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.

  1. The ultrasound brain helmet: feasibility study of multiple simultaneous 3D scans of cerebral vasculature.

    PubMed

    Smith, Stephen W; Ivancevich, Nikolas M; Lindsey, Brooks D; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A; Laskowitz, Daniel T

    2009-02-01

    We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time three-dimensional (3D) scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging (Durham, NC, USA) real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64 degrees pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128 degrees sector, two simultaneous parasagittal images merged into a 128 degrees x 64 degrees C-mode plane and a simultaneous 64 degrees axial image. Real-time 3D color Doppler scans from a skull phantom with latex blood vessel were obtained after contrast agent injection as a proof of concept. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.

  2. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates over the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D body of the earth, 2-D modeling of wave propagation results in inaccurate wave estimates. In this paper, we propose a 3-D numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model estimates real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.

  3. Quantification of Left Ventricular Linear, Areal and Volumetric Dimensions: A Phantom and in Vivo Comparison of 2-D and Real-Time 3-D Echocardiography with Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Lagerstrand, Kerstin M; Gao, Sinsia A; Lamm, Carl R; Bech-Hanssen, Odd

    2015-07-01

    Two-dimensional echocardiography and real-time 3-D echocardiography have been reported to underestimate human left ventricular volumes significantly compared with cardiovascular magnetic resonance. We investigated the ability of 2-D echocardiography, real-time 3-D echocardiography and cardiovascular magnetic resonance to delineate dimensions of increasing complexity (diameter-area-volume) in a multimodality phantom model and in vivo, with the aim of elucidating the main cause of underestimation. All modalities were able to delineate phantom dimensions with high precision. In vivo, 2-D and real-time 3-D echocardiography underestimated short-axis end-diastolic linear and areal and all left ventricular volumetric dimensions significantly compared with cardiovascular magnetic resonance, but not short-axis end-systolic linear and areal dimensions. Underestimation increased successively from linear to volumetric left ventricular dimensions. When analyzed according to the same principles, 2-D and real-time 3-D echocardiography provided similar left ventricular volumes. In conclusion, echocardiographic underestimation of left ventricular dimensions is due mainly to inherent technical differences in the ability to differentiate trabeculated from compact myocardium. Identical endocardial border definition criteria are needed to minimize differences between the modalities and to ensure better comparability in clinical practice. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  4. Strategy Planning Visualization Tool (SPVT) for the Air Operations Center (AOC) Volume I: SPVT Summary and COA Sketch

    DTIC Science & Technology

    2009-12-01

    Limitations of Real Time Battle Damage Assessment. [Thesis.] Maxwell AFB, AL: Air University. Shadbolt, N., Hall, W., Berners-Lee, T. (2006, May-June... Tim) COA Development Use Case 3.7: User creates a new Course of Action (COA). User Story / Context of Use: The JFACC may issue clear and... By default, the timing of a Mission Analysis object will be relative to the Operation's Default timing (D-Day). If Use Case 3.24 is implemented, then

  5. High-quality slab-based intermixing method for fusion rendering of multiple medical objects.

    PubMed

    Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil

    2016-01-01

    The visualization of multiple 3D objects has been increasingly required for recent applications in medical fields. Due to heterogeneity in data representation or configuration, it is difficult to render multiple medical objects efficiently and in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects that preserves real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce the virtual zSlab, which extends an infinitely thin boundary (such as a polygonal object) into a slab with finite thickness. Finally, based on the virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with a newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings, in terms of rendering quality, than conventional approaches. The proposed intermixing scheme provides high-quality results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies show that the proposed method has the notable advantages of rendering independence and reusability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
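
    The slab-by-slab accumulation such a rendering pipeline performs reduces, along a single ray, to standard front-to-back compositing. The sketch below shows only that generic accumulation step, not the paper's in-slab visibility interpolation.

```python
def composite_front_to_back(colors, alphas):
    """Front-to-back compositing of the samples (slabs) along one ray:
    each slab's contribution is weighted by the transparency accumulated
    in front of it. Returns (color, opacity) for the ray."""
    out_color = 0.0
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        out_color += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:     # early ray termination
            break
    return out_color, 1.0 - transmittance

# An opaque white slab hides everything behind it.
print(composite_front_to_back([1.0, 0.5], [1.0, 1.0]))  # (1.0, 1.0)
# Two half-transparent slabs: 0.5*1.0 + 0.5*0.5*0.5 = 0.625
print(composite_front_to_back([1.0, 0.5], [0.5, 0.5]))  # (0.625, 0.75)
```

Subdividing slabs and interpolating visibility inside them, as the paper proposes, refines where each `a` and `c` is sampled; the outer accumulation loop stays the same.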

  6. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to the RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are established manually on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml), both imaged in 2D and reconstructed off-line, and acquired as RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows the user to navigate through the reconstructed volume and to display any section of it.
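
    The level-set machinery referred to here can be illustrated with the simplest case: a front moving at constant speed F, governed by phi_t + F|grad phi| = 0 and discretized with a first-order upwind (Godunov) scheme. This is a generic 2D toy, not the study's modified equation or clinical data.

```python
import numpy as np

# Level-set evolution of a circle under constant outward speed F.
n, F, dt, steps = 100, 1.0, 0.5, 40
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - 50) ** 2 + (y - 50) ** 2) - 10.0  # signed distance, radius 10

for _ in range(steps):
    # Godunov upwind gradient magnitude for F > 0 (grid spacing 1).
    dxm = phi - np.roll(phi, 1, axis=1)
    dxp = np.roll(phi, -1, axis=1) - phi
    dym = phi - np.roll(phi, 1, axis=0)
    dyp = np.roll(phi, -1, axis=0) - phi
    grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                   np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    phi = phi - dt * F * grad

# The zero level set (the "contour line") has expanded: radius ~ 10 + F*t = 30.
area = (phi < 0).sum()
print(area > np.pi * 10 ** 2)  # True
```

The segmentation method replaces the constant F with an image-driven speed that stalls the front at the endocardial boundary; the update loop has the same upwind, conservation-law structure.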

  7. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  8. A Real-World Study of Switching From Allopurinol to Febuxostat in a Health Plan Database

    PubMed Central

    Altan, Aylin; Shiozawa, Aki; Bancroft, Tim; Singh, Jasvinder A.

    2015-01-01

    Objective The objective of this study was to assess the real-world comparative effectiveness of continuing on allopurinol versus switching to febuxostat. Methods In a retrospective claims data study of enrollees in health plans affiliated with Optum, we evaluated patients from February 1, 2009, to May 31, 2012, with a gout diagnosis, a pharmacy claim for allopurinol or febuxostat, and at least 1 serum uric acid (SUA) result available during the follow-up period. Univariate and multivariable-adjusted analyses (controlling for patient demographics and clinical factors) assessed the likelihood of SUA lowering and achievement of target SUA of less than 6.0 mg/dL or less than 5.0 mg/dL in allopurinol continuers versus febuxostat switchers. Results The final study population included 748 subjects who switched to febuxostat from allopurinol and 4795 continuing users of allopurinol. The most common doses of allopurinol were 300 mg/d or less in 95% of allopurinol continuers and 93% of febuxostat switchers (prior to switching); the most common dose of febuxostat was 40 mg/d, in 77% of febuxostat switchers (after switching). Compared with allopurinol continuers, febuxostat switchers had greater (1) mean preindex SUA, 8.0 mg/dL versus 6.6 mg/dL (P < 0.001); (2) likelihood of postindex SUA of less than 6.0 mg/dL, 62.2% versus 58.7% (P = 0.072); (3) likelihood of postindex SUA of less than 5.0 mg/dL, 38.9% versus 29.6% (P < 0.001); and (4) decrease in SUA, 1.8 (SD, 2.2) mg/dL versus 0.4 (SD, 1.7) mg/dL (P < 0.001). In multivariable-adjusted analyses, compared with allopurinol continuers, febuxostat switchers had significantly higher likelihood of achieving SUA of less than 6.0 mg/dL (40% higher) and SUA of less than 5.0 mg/dL (83% higher). Conclusions In this "real-world" setting, many patients with gout not surprisingly were not treated with maximum permitted doses of allopurinol. Patients switched to febuxostat were more likely to achieve target SUA levels than those who continued on generally stable doses of allopurinol. PMID:26580304

  9. Molecular surface representation using 3D Zernike descriptors for protein shape comparison and docking.

    PubMed

    Kihara, Daisuke; Sael, Lee; Chikhi, Rayan; Esquivel-Rodriguez, Juan

    2011-09-01

    The tertiary structures of proteins have been solved at an increasing pace in recent years. To capitalize on the enormous effort invested in accumulating structure data, efficient and effective computational methods are needed for comparing and searching protein structures and for investigating their interactions. We introduce the 3D Zernike descriptor (3DZD), an emerging technique for describing molecular surfaces. The 3DZD is a series expansion of a three-dimensional mathematical function, so a tertiary structure is represented compactly by a vector of coefficients of the terms in the series. A strong advantage of the 3DZD is that it is invariant to rotation of the target object. These two characteristics, compactness and rotation invariance, allow rapid comparison of surface shapes, fast enough for real-time structure database screening. In this article, we review various applications of the 3DZD that have been proposed recently.
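
    Once every structure is reduced to a rotation-invariant coefficient vector, database screening becomes a plain nearest-neighbor search over those vectors; no alignment step is needed. The sketch below uses an invented random database with 121-element vectors (an assumed, illustrative descriptor size) to show why the comparison is fast.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical database: each protein surface reduced to a fixed-length
# descriptor vector (121 components here, purely for illustration).
database = rng.random((5000, 121))
# Query: a slightly perturbed copy of entry 42, mimicking the same shape
# scanned or oriented differently (invariance already folded in).
query = database[42] + rng.normal(scale=0.01, size=121)

# Screening is one vectorized distance computation over the whole database.
d = np.linalg.norm(database - query, axis=1)
print(int(np.argmin(d)))  # 42
```

Because each comparison is a Euclidean distance between short vectors, scanning thousands of entries takes milliseconds, which is what makes real-time structure database screening practical.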

  10. Real-time image sequence segmentation using curve evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Weisong

    2001-04-01

    In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.
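
    The 3D structure tensor's role as a robust frame-difference signal can be sketched by measuring, at each pixel, what fraction of the locally averaged spatio-temporal gradient energy is temporal. This is an illustration of the idea only, not the paper's exact tensor analysis or its real-time implementation.

```python
import numpy as np

def box3(a):
    """3x3x3 box average: the smoothing that turns gradient outer
    products into a structure tensor."""
    out = np.zeros_like(a)
    for s0 in (-1, 0, 1):
        for s1 in (-1, 0, 1):
            for s2 in (-1, 0, 1):
                out += np.roll(a, (s0, s1, s2), axis=(0, 1, 2))
    return out / 27.0

def motion_measure(seq):
    """Spatio-temporal structure-tensor motion cue for a (t, y, x)
    sequence: the fraction of locally averaged gradient energy that is
    temporal, evaluated at the middle frame."""
    gt, gy, gx = np.gradient(seq.astype(float))
    Jtt = box3(gt * gt)                  # temporal tensor component
    Jss = box3(gx * gx + gy * gy)        # spatial tensor components
    return (Jtt / (Jtt + Jss + 1e-9))[seq.shape[0] // 2]

# A bright square moving right across a static background.
seq = np.zeros((5, 40, 40))
for t in range(5):
    seq[t, 15:25, 5 + 3 * t:15 + 3 * t] = 1.0
m = motion_measure(seq)
print(m[20, 12] > m[20, 35])  # True: moving region outscores static background
```

Thresholding such a measure gives a frame-difference mask that is less noise-sensitive than raw subtraction, which is the signal the curve-evolution step then grows into whole objects.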

  11. [Measurement of left atrial and ventricular volumes in real-time 3D echocardiography. Validation by nuclear magnetic resonance]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Qin, J. X.; White, R. D.; Thomas, J. D.

    2001-01-01

    The measurement of the left ventricular ejection fraction is important for the evaluation of cardiomyopathy and depends on the measurement of left ventricular volumes. No existing conventional echocardiographic means can measure true left atrial and ventricular volumes without mathematical approximations. The aim of this study was to test a new real-time 3-dimensional echocardiographic system for calculating left atrial and ventricular volumes in 40 patients, after in vitro validation. The volumes of the left atrium and ventricle, acquired by real-time 3-D echocardiography in the apical view, were calculated in 7 sections parallel to the surface of the probe and compared with atrial (10 patients) and ventricular (30 patients) volumes calculated by nuclear magnetic resonance using the Simpson method, and with volumes of water in balloons placed in a cistern. Linear regression analysis showed an excellent correlation between the real volume of water in the balloons and the volumes given by real-time 3-dimensional echocardiography (y = 0.94x + 5.5, r = 0.99, p < 0.001, D = -10 +/- 4.5 ml). A good correlation was observed between real-time 3-dimensional echocardiography and nuclear magnetic resonance for the measurement of left atrial and ventricular volumes (y = 0.95x - 10, r = 0.91, p < 0.001, D = -14.8 +/- 19.5 ml and y = 0.87x + 10, r = 0.98, p < 0.001, D = -8.3 +/- 18.7 ml, respectively). The authors conclude that real-time three-dimensional echocardiography allows accurate measurement of left heart volumes, underlining the clinical potential of this new 3-D method.
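
    The slice-summation used here (areas of parallel sections times slice thickness, the Simpson / method-of-discs idea) is easy to sketch and to sanity-check against a shape of known volume. The numbers below are illustrative, not patient data.

```python
import numpy as np

def volume_from_slices(areas_cm2, slice_thickness_cm):
    """Disc-summation (Simpson-style) volume estimate from the areas of
    parallel sections: V = sum(A_i) * thickness."""
    return float(np.sum(areas_cm2) * slice_thickness_cm)

# Sanity check on a sphere of radius 4 cm, cut into thin parallel discs.
R, n = 4.0, 400
z = (np.arange(n) + 0.5) / n * 2 * R - R       # disc centers along the axis
areas = np.pi * (R ** 2 - z ** 2)              # cross-sectional disc areas
v = volume_from_slices(areas, 2 * R / n)
v_true = 4 / 3 * np.pi * R ** 3
print(abs(v - v_true) / v_true < 0.01)  # True
```

With only 7 sections, as in the study, the same formula applies but the discretization error grows, so the accuracy reported there depends on how well a few sections sample the chamber shape.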

  12. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Shiota, T.; Thomas, J. D.

    2000-01-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  13. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography.

    PubMed

    Qin, J X; Shiota, T; Thomas, J D

    2000-11-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  14. Evidence for similar early but not late representation of possible and impossible objects

    PubMed Central

    Freud, Erez; Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi

    2015-01-01

    The perceptual processes that mediate the ability to efficiently represent object 3D structure are still not fully understood. The current study aimed to shed light on these processes by utilizing spatially possible and impossible objects, the latter of which could not be created in real 3D space. Despite being perceived as exceptionally unusual, impossible objects still possess fundamental Gestalt attributes and valid local depth cues that may support their initial successful representation. Based on this notion and on recent findings from our lab, we hypothesized that the initial representation of impossible objects would involve mechanisms common to those mediating typical object perception, while the perceived differences between possible and impossible objects would emerge later along the processing hierarchy. In Experiment 1, participants performed same/different classifications of two markers superimposed on a display containing two objects (possible or impossible). Faster reaction times were observed for displays in which the markers were superimposed on the same object (“object-based benefit”). Importantly, this benefit was similar for possible and impossible objects, suggesting that the representations of the two object categories rely on similar perceptual organization processes. Yet responses for impossible objects were slower compared to possible objects. Experiment 2 was designed to examine the origin of this effect. Participants classified the location of two markers while exposure duration was manipulated. A similar pattern of performance was found for possible and impossible objects at the short exposure duration, with differences in accuracy between the two object types emerging only at longer exposure durations. Overall, these findings provide evidence that the representation of object structure relies on a multi-level process and that object impossibility selectively impairs the rendering of a fine-detailed description of object structure.
PMID:25762949

  15. Non-invasive characterization of real-time bladder sensation using accelerated hydration and a novel sensation meter: An initial experience

    PubMed Central

    Nagle, Anna S.; Speich, John E.; De Wachter, Stefan G.; Ghamarian, Peter P.; Le, David M.; Colhoun, Andrew F.; Ratz, Paul H.; Barbee, Robert W.; Klausner, Adam P.

    2016-01-01

    AIMS The purpose of this investigation was to develop a non-invasive, objective, and unprompted method to characterize real-time bladder sensation. METHODS Volunteers with and without overactive bladder (OAB) were prospectively enrolled in a preliminary accelerated hydration study. Participants drank 2L Gatorade-G2® and recorded real-time sensation (0–100% scale) and standardized verbal sensory thresholds using a novel, touch-screen “sensation meter.” 3D bladder ultrasound images were recorded throughout fillings for a subset of participants. Sensation data were recorded for two consecutive complete fill-void cycles. RESULTS Data from 14 normal and 12 OAB participants were obtained (ICIq-OAB-5a = 0 vs. ≥3). Filling duration decreased in fill2 compared to fill1, but volume did not significantly change. In normals, adjacent verbal sensory thresholds (within fill) showed no overlap, and identical thresholds (between fill) were similar, demonstrating effective differentiation between degrees of %bladder capacity. In OAB, within-fill overlaps and between-fill differences were identified. Real-time %capacity-sensation curves left shifted from fill1 to fill2 in normals, consistent with expected viscoelastic behavior, but unexpectedly right shifted in OAB. 3D ultrasound volume data showed that fill rates started slowly and ramped up with variable end points. CONCLUSIONS This study establishes a non-invasive means to evaluate real-time bladder sensation using a two-fill accelerated hydration protocol and a sensation meter. Verbal thresholds were inconsistent in OAB, and the right shift in OAB %capacity–sensation curve suggests potential biomechanical and/or sensitization changes. This methodology could be used to gain valuable information on different forms of OAB in a completely non-invasive way. PMID:27654469

  16. Non-invasive characterization of real-time bladder sensation using accelerated hydration and a novel sensation meter: An initial experience.

    PubMed

    Nagle, Anna S; Speich, John E; De Wachter, Stefan G; Ghamarian, Peter P; Le, David M; Colhoun, Andrew F; Ratz, Paul H; Barbee, Robert W; Klausner, Adam P

    2017-06-01

    The purpose of this investigation was to develop a non-invasive, objective, and unprompted method to characterize real-time bladder sensation. Volunteers with and without overactive bladder (OAB) were prospectively enrolled in a preliminary accelerated hydration study. Participants drank 2L Gatorade-G2® and recorded real-time sensation (0-100% scale) and standardized verbal sensory thresholds using a novel, touch-screen "sensation meter." 3D bladder ultrasound images were recorded throughout fillings for a subset of participants. Sensation data were recorded for two consecutive complete fill-void cycles. Data from 14 normal and 12 OAB participants were obtained (ICIq-OAB-5a = 0 vs. ≥3). Filling duration decreased in fill2 compared to fill1, but volume did not significantly change. In normals, adjacent verbal sensory thresholds (within fill) showed no overlap, and identical thresholds (between fill) were similar, demonstrating effective differentiation between degrees of %bladder capacity. In OAB, within-fill overlaps and between-fill differences were identified. Real-time %capacity-sensation curves left shifted from fill1 to fill2 in normals, consistent with expected viscoelastic behavior, but unexpectedly right shifted in OAB. 3D ultrasound volume data showed that fill rates started slowly and ramped up with variable end points. This study establishes a non-invasive means to evaluate real-time bladder sensation using a two-fill accelerated hydration protocol and a sensation meter. Verbal thresholds were inconsistent in OAB, and the right shift in OAB %capacity-sensation curve suggests potential biomechanical and/or sensitization changes. This methodology could be used to gain valuable information on different forms of OAB in a completely non-invasive way. © 2016 Wiley Periodicals, Inc.

  17. Generation of large scale urban environments to support advanced sensor and seeker simulation

    NASA Astrophysics Data System (ADS)

    Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan

    2009-05-01

    One of the key aspects for the design of a next generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties that are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system, developed and maintained by AFRL/Eglin, has been created, and a second exporter to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the content-creation capabilities for advanced seeker-processing-algorithm simulation and sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation into the Irma runtime environment.

  18. Interactive Immersive Virtualmuseum: Digital Documentation for Virtual Interaction

    NASA Astrophysics Data System (ADS)

    Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.

    2018-05-01

    Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective to serve this purpose; nevertheless, due to poor user-system interaction, caused by the incomplete maturity of technology specific to museum applications, it is still quite uncommon to find immersive installations in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes possible an interaction with archaeological finds or any other cultural heritage inside different kinds of immersive virtual reality spaces. Two different case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move in the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central; artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated inside the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum, making possible interaction with perishable, damaged or lost objects and public access to inaccessible or no-longer-existing places, promoting in this way the preservation of fragile sites.

  19. Real-time 3D transesophageal echocardiography for the evaluation of rheumatic mitral stenosis.

    PubMed

    Schlosshan, Dominik; Aggarwal, Gunjan; Mathur, Gita; Allan, Roger; Cranney, Greg

    2011-06-01

    The aims of this study were: 1) to assess the feasibility and reliability of performing mitral valve area (MVA) measurements in patients with rheumatic mitral valve stenosis (RhMS) using real-time 3-dimensional transesophageal echocardiography (3DTEE) planimetry (MVA(3D)); 2) to compare MVA(3D) with conventional techniques: 2-dimensional (2D) planimetry (MVA(2D)), pressure half-time (MVA(PHT)), and continuity equation (MVA(CON)); and 3) to evaluate the degree of mitral commissural fusion. 3DTEE is a novel technique that provides excellent image quality of the mitral valve. Real-time 3DTEE is a relatively recent enhancement of this technique. To date, there have been no feasibility studies investigating the utility of real-time 3DTEE in the assessment of RhMS. Forty-three consecutive patients referred for echocardiographic evaluation of RhMS and suitability for percutaneous mitral valvuloplasty were assessed using 2D transthoracic echocardiography and real-time 3DTEE. MVA(3D), MVA(2D), MVA(PHT), MVA(CON), and the degree of commissural fusion were evaluated. MVA(3D) assessment was possible in 41 patients (95%). MVA(3D) measurements were significantly lower compared with MVA(2D) (mean difference: -0.16 ± 0.22; n=25, p<0.005) and MVA(PHT) (mean difference: -0.23 ± 0.28 cm(2); n=39, p<0.0001) but marginally greater than MVA(CON) (mean difference: 0.05 ± 0.22 cm(2); n=24, p=0.82). MVA(3D) demonstrated best agreement with MVA(CON) (intraclass correlation coefficient [ICC] 0.83), followed by MVA(2D) (ICC 0.79) and MVA(PHT) (ICC 0.58). Interobserver and intraobserver agreement was excellent for MVA(3D), with ICCs of 0.93 and 0.96, respectively. Excellent commissural evaluation was possible in all patients using 3DTEE. Compared with 3DTEE, underestimation of the degree of commissural fusion using 2D transthoracic echocardiography was observed in 19%, with weak agreement between methods (κ<0.4). 
MVA planimetry is feasible in the majority of patients with RhMS using 3DTEE, with excellent reproducibility, and compares favorably with established methods. Three-dimensional transesophageal echocardiography allows excellent assessment of commissural fusion. Copyright © 2011 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  20. Application of a multi-beam vibrometer on industrial components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bendel, Karl

    2014-05-27

    Laser Doppler vibrometry is a well-proven tool for the non-contact measurement of vibration. Scanning several measurement points allows visualization of the deflection shape of the component, ideally a 3D operating deflection shape if a 3-D scanner is applied. Measuring the points sequentially, however, requires stationary behavior during the measurement time, which cannot be guaranteed for many real objects. Therefore, a multipoint laser Doppler vibrometer has been developed by Polytec and the University of Stuttgart with Bosch as industrial partner. A short description of the measurement system is given. Applications for the parallel measurement of the vibration of several points are shown for non-stationary vibrating Bosch components such as power tools or valves.

  1. Developing Articulated Human Models from Laser Scan Data for Use as Avatars in Real-Time Networked Virtual Environments

    DTIC Science & Technology

    2001-09-01

    structure model, motion model, physical model, and possibly many other characteristics depending on the application [Ref. 4]. While the film industry has...applications. The film industry relies on this technology almost exclusively, as it is highly reliable under controlled conditions. Since optical tracking...Wavefront. Maya has been used extensively in the film industry to provide lifelike animation, and is adept at handling 3D objects [Ref. 27]. Maya can

  2. An immersive surgery training system with live streaming capability.

    PubMed

    Yang, Yang; Guo, Xinqing; Yu, Zhan; Steiner, Karl V; Barner, Kenneth E; Bauer, Thomas L; Yu, Jingyi

    2014-01-01

    Providing real-time, interactive immersive surgical training has been a key research area in telemedicine. Earlier approaches have mainly adopted videotaped training that can only show imagery from a fixed viewpoint. Recent advances in commodity 3D imaging have enabled a new paradigm for immersive surgical training by acquiring nearly complete 3D reconstructions of actual surgical procedures. However, unlike 2D videotaping, which can easily stream data in real time, 3D-imaging-based solutions have so far required pre-capturing and processing the data; surgical training using the data has to be conducted offline after the acquisition. In this paper, we present a new real-time immersive 3D surgical training system. Our solution builds upon the recent multi-Kinect-based surgical training system [1] that can acquire and display high-fidelity 3D surgical procedures using only a small number of Microsoft Kinect sensors. On top of this system we build a client-server model for real-time streaming. On the server front, we efficiently fuse the multiple Kinect data streams acquired from different viewpoints, then compress and stream the data to the client. On the client front, we build an interactive space-time navigator to allow remote users (e.g., trainees) to witness the surgical procedure in real time as if they were present in the room.

  3. Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wong, K.; Ellul, C.

    2016-10-01

    Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
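    As a sketch of how such descriptors might be computed, the fragment below derives a few of the listed metrics from raw vertex/face arrays. The exact definitions used here (x-y bounding-box footprint, shortest-mesh-edge feature length) are assumptions for illustration, not the authors' specification:

```python
import numpy as np

def model_metrics(buildings):
    """Geometry-based descriptors for a 3D city dataset. Each building is
    a (vertices Nx3, triangular faces Mx3) pair."""
    v_counts = [len(v) for v, _ in buildings]
    f_counts = [len(f) for _, f in buildings]
    footprints, min_edge = [], np.inf
    for verts, faces in buildings:
        verts = np.asarray(verts, dtype=float)
        w, h = verts[:, :2].max(axis=0) - verts[:, :2].min(axis=0)
        footprints.append(w * h)               # x-y bounding-box area
        for f in faces:                        # shortest edge in the mesh
            for a, b in ((0, 1), (1, 2), (2, 0)):
                min_edge = min(min_edge, np.linalg.norm(verts[f[a]] - verts[f[b]]))
    return {
        "mean_vertices": float(np.mean(v_counts)),
        "mean_faces": float(np.mean(f_counts)),
        "vertex_face_ratio": sum(v_counts) / sum(f_counts),
        "min_footprint_area": float(min(footprints)),
        "min_feature_length": float(min_edge),
    }

# single unit-square "building": two triangles, flat at z = 0
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
m = model_metrics([(verts, faces)])
```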

  4. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, making it possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4GB of RAM.
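    A minimal sketch of the view-dependent selection idea: choose a detail level from camera distance alone. The thresholds and level count are invented for illustration; the paper's scheme additionally works out-of-core and budgets memory:

```python
def pick_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Distance-based level-of-detail choice: nearer geometry gets the
    denser mesh. 0 = full resolution, len(thresholds) = coarsest proxy."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)

lods = [pick_lod(d) for d in (5, 30, 120, 500)]  # -> [0, 1, 2, 3]
```

    A production renderer would typically drive this by projected screen-space error rather than raw distance, but the selection structure is the same.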

  5. Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback

    PubMed Central

    Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.

    2014-01-01

    Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200

  6. Prediction of Tubal Ectopic Pregnancy Using Offline Analysis of 3-Dimensional Transvaginal Ultrasonographic Data Sets: An Interobserver and Diagnostic Accuracy Study.

    PubMed

    Infante, Fernando; Espada Vaquero, Mercedes; Bignardi, Tommaso; Lu, Chuan; Testa, Antonia C; Fauchon, David; Epstein, Elisabeth; Leone, Francesco P G; Van den Bosch, Thierry; Martins, Wellington P; Condous, George

    2018-06-01

    To assess interobserver reproducibility in detecting tubal ectopic pregnancies by reading data sets from 3-dimensional (3D) transvaginal ultrasonography (TVUS) and comparing it with real-time 2-dimensional (2D) TVUS. Images were initially classified as showing pregnancies of unknown location or tubal ectopic pregnancies on real-time 2D TVUS by an experienced sonologist, who acquired 5 3D volumes. Data sets were analyzed offline by 5 observers who had to classify each case as ectopic pregnancy or pregnancy of unknown location. The interobserver reproducibility was evaluated by the Fleiss κ statistic. The performance of each observer in predicting ectopic pregnancies was compared to that of the experienced sonologist. Women were followed until they were reclassified as follows: (1) failed pregnancy of unknown location; (2) intrauterine pregnancy; (3) ectopic pregnancy; or (4) persistent pregnancy of unknown location. Sixty-one women were included. The agreement between reading offline 3D data sets and the first real-time 2D TVUS was very good (80%-82%; κ = 0.89). The overall interobserver agreement among observers reading offline 3D data sets was moderate (κ = 0.52). The diagnostic performance of experienced observers reading offline 3D data sets had accuracy of 78.3% to 85.0%, sensitivity of 66.7% to 81.3%, specificity of 79.5% to 88.4%, positive predictive value of 57.1% to 72.2%, and negative predictive value of 87.5% to 91.3%, compared to the experienced sonologist's real-time 2D TVUS: accuracy of 94.5%, sensitivity of 94.4%, specificity of 94.5%, positive predictive value of 85.0%, and negative predictive value of 98.1%. The diagnostic accuracy of 3D TVUS by reading offline data sets for predicting ectopic pregnancies is dependent on experience. Reading only static 3D data sets without clinical information does not match the diagnostic performance of real-time 2D TVUS combined with clinical information obtained during the scan. 
© 2017 by the American Institute of Ultrasound in Medicine.
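    The reported diagnostic quantities all follow from a standard 2x2 confusion table. The counts below are invented for illustration and are not the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, PPV and NPV from true/false
    positive and negative counts of a binary diagnostic test."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # detected ectopics / all ectopics
        "specificity": tn / (tn + fp),   # correct negatives / all non-ectopics
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = diagnostic_accuracy(tp=17, fp=3, fn=1, tn=40)  # hypothetical counts
```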

  7. High-resolution T1-weighted 3D real IR imaging of the temporal bone using triple-dose contrast material.

    PubMed

    Naganawa, Shinji; Koshikawa, Tokiko; Nakamura, Tatsuya; Fukatsu, Hiroshi; Ishigaki, Takeo; Aoki, Ikuo

    2003-12-01

    The small structures in the temporal bone are surrounded by bone and air. The objectives of this study were (a) to compare contrast-enhanced T1-weighted images acquired by fast spin-echo-based three-dimensional real inversion recovery (3D rIR) against those acquired by gradient echo-based 3D SPGR in the visualization of the enhancement of small structures in the temporal bone, and (b) to determine whether either 3D rIR or 3D SPGR is useful for visualizing enhancement of the cochlear lymph fluid. Seven healthy men (age range 27-46 years) volunteered to participate in this study. All MR imaging was performed using a dedicated bilateral quadrature surface phased-array coil for temporal bone imaging at 1.5 T (Visart EX, Toshiba, Tokyo, Japan). The 3D rIR images (TR/TE/TI: 1800 ms/10 ms/500 ms) and flow-compensated 3D SPGR images (TR/TE/FA: 23 ms/10 ms/25 degrees) were obtained with a reconstructed voxel size of 0.6 x 0.7 x 0.8 mm3. Images were acquired before and 1, 90, 180, and 270 min after the administration of triple-dose Gd-DTPA-BMA (0.3 mmol/kg). In post-contrast MR images, the degree of enhancement of the cochlear aqueduct, endolymphatic sac, subarcuate artery, geniculate ganglion of the facial nerve, and cochlear lymph fluid space was assessed by two radiologists. The degree of enhancement was scored as follows: 0 (no enhancement); 1 (slight enhancement); 2 (intermediate between 1 and 3); and 3 (enhancement similar to that of vessels). Enhancement scores for the endolymphatic sac, subarcuate artery, and geniculate ganglion were higher in 3D rIR than in 3D SPGR. Washout of enhancement in the endolymphatic sac appeared to be delayed compared with that in the subarcuate artery, suggesting that the enhancement in the endolymphatic sac may have been due in part to non-vascular tissue enhancement. Enhancement of the cochlear lymph space was not observed in any of the subjects in 3D rIR and 3D SPGR. 
The 3D rIR sequence may be more sensitive than the 3D SPGR sequence in visualizing the enhancement of small structures in the temporal bone; however, enhancement of the cochlear fluid space could not be visualized even with 3D rIR, triple-dose contrast, and dedicated coils at 1.5 T.

  8. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, and aerosols, imaging through walls as in hostage situations, and imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from large distances, allowing standoff detection of suspicious objects and humans.
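    The FMCW ranging the abstract relies on maps a measured beat frequency to target range. A back-of-the-envelope sketch follows; the sweep parameters are illustrative, not the system's actual values:

```python
def fmcw_range(beat_hz, bandwidth_hz, sweep_s, c=3.0e8):
    """FMCW radar ranging: a target at range R delays the echo by 2R/c,
    which against a linear sweep of bandwidth B over time T produces a
    beat frequency f_b = (2R/c)(B/T); hence R = f_b * c * T / (2B)."""
    return beat_hz * c * sweep_s / (2.0 * bandwidth_hz)

# 10 GHz sweep over 1 ms: a ~133 kHz beat corresponds to ~2 m standoff
r = fmcw_range(beat_hz=133.33e3, bandwidth_hz=10e9, sweep_s=1e-3)
```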

  9. 2D and 3D Traveling Salesman Problem

    ERIC Educational Resources Information Center

    Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt

    2011-01-01

    When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…

  10. The ultrasound brain helmet: early human feasibility study of multiple simultaneous 3D scans of cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Lindsey, Brooks D.; Ivancevich, Nikolas M.; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A.; Laskowitz, Daniel T.; Smith, Stephen W.

    2009-02-01

    We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time 3D scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64° pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128° sector, two simultaneous parasagittal images merged into a 128° × 64° C-mode plane, and a simultaneous 64° axial image. Real-time 3D color Doppler images acquired in initial clinical studies after contrast injection demonstrate flow in several representative blood vessels. An offline Doppler rendering of data from two transducers simultaneously scanning via the temporal windows provides an early visualization of the flow in vessels on both sides of the brain. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission, thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.

  11. 3D shape recovery from image focus using Gabor features

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid

    2018-04-01

    Recovering an accurate and precise depth map from a set of acquired 2-D images of a target object, each carrying different focus information, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts the corresponding color information into focus information, which is then used to recover the depth map. This article introduces Gabor features as a focus measure for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to that of the human visual system, and such filters are commonly applied to texture representation. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise, and accuracy, the proposed measure is a strong alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, despite its simplicity, generates accurate results.
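
    The core of the shape-from-focus idea described above can be sketched in a few lines: build a small bank of Gabor kernels, score every pixel of every frame in the focus stack by its filter-response energy, and take the frame of maximum focus as the depth index. The kernel parameters and the argmax rule below are illustrative assumptions, not the article's exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a 2-D Gabor filter (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def depth_from_focus(stack, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """stack: (n_frames, H, W) focus stack.
    Focus measure = summed Gabor response energy over orientations;
    depth index = frame of maximum focus at each pixel."""
    kernels = [gabor_kernel(theta=t) for t in thetas]
    measures = np.stack([
        sum(convolve(frame, k) ** 2 for k in kernels) for frame in stack
    ])
    return measures.argmax(axis=0)
```

    In a real pipeline the integer frame index would be refined (e.g. by fitting a curve through the focus measure) and mapped to metric depth via the focus step size.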

  12. Architecture for high performance stereoscopic game rendering on Android

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low-power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique that separates the views in the depth domain and renders directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing its performance in comparison to more traditional rendering techniques, including depth-based image rendering, both in terms of frame rates and impact on battery consumption.
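
    The depth-domain view separation described above ultimately reduces to assigning each object a horizontal parallax derived from its depth. A minimal sketch of one common parallax model (an assumption for illustration, not necessarily the renderer used in the paper):

```python
def screen_parallax(z, eye_sep, z_screen):
    """Horizontal parallax for an object at depth z, given an eye
    separation and a convergence (screen) depth. Objects at
    z == z_screen get zero parallax; nearer objects get negative
    (crossed) parallax and appear to pop out of the screen."""
    return eye_sep * (1.0 - z_screen / z)
```

    The left and right views then render each object shifted by minus and plus half of this parallax, respectively.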

  13. Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1985-01-01

    A technique for the recognition of complex three-dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants: algebraic expressions that remain unchanged regardless of the objects' orientation and location in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we extend the method to the representation of more complex objects. Two complex objects are represented digitally, their 3-D moment invariants are calculated, and the invariance of these expressions is then verified by changing the orientation and location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
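
    The second-order case of the moment-invariant idea can be sketched directly: the trace of the matrix of second-order central moments is unchanged by rotation and translation. The toy check below on a point set is an illustration of that property only, not the paper's full set of invariants.

```python
import numpy as np

def central_moments_3d(pts):
    """Second-order central moments mu_pqr of a 3-D point set (N, 3)."""
    c = pts - pts.mean(axis=0)
    return {(p, q, r): np.mean(c[:, 0]**p * c[:, 1]**q * c[:, 2]**r)
            for p in range(3) for q in range(3) for r in range(3)
            if p + q + r == 2}

def j1_invariant(pts):
    """J1 = mu200 + mu020 + mu002, i.e. the trace of the second-moment
    matrix; invariant under rotation and translation of the object."""
    mu = central_moments_3d(pts)
    return mu[(2, 0, 0)] + mu[(0, 2, 0)] + mu[(0, 0, 2)]
```

    Rotation maps the second-moment matrix M to R M Rᵀ, whose trace equals that of M, which is why J1 survives any repositioning of the object in the field of view.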

  14. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, one of the most dominant factors degrading image quality and visual comfort on 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, and scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, subjective testing may give a more accurate evaluation; however, it is time consuming and unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms, which could in turn bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized through image processing techniques that inject different levels of crosstalk between the two images of a stereo pair. It can be seen from the literature that the structure of a scene has a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs using the Structural SIMilarity (SSIM) algorithm, which directly evaluates structural changes between two complex-structured signals. The structural changes of the left and right views are computed separately and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map.
    Moreover, human attention is an important factor in crosstalk assessment: when viewing 3D content, perceptually salient regions are highly likely to be major contributors to the quality of experience. To take this into account, perceptually significant regions are extracted, and a spatial pooling technique combines the structural distortion map, depth map and visual salience map to predict the perceived crosstalk more precisely. To verify the performance of the proposed crosstalk assessment metric, subjective experiments were conducted with 24 participants viewing and rating 60 stimuli (5 scenes × 4 crosstalk levels × 3 camera distances). After outlier removal and statistical processing, the correlation with the subjective test is examined using the Pearson and Spearman rank-order correlation coefficients. Furthermore, the proposed method is compared with two traditional 2D metrics, PSNR and SSIM. The objective score is mapped to the subjective scale using a nonlinear fitting function to directly evaluate the performance of the metric. RESULTS: The evaluation results demonstrate that the proposed metric is highly correlated with the subjective scores compared with the existing approaches. With a Pearson coefficient of 90.3%, it is promising for objective evaluation of perceived crosstalk. NOVELTY: The main goal of our paper is to introduce an objective metric for stereo crosstalk assessment. The contributions are twofold. First, an appropriate simulation of crosstalk that considers the characteristics of patterned retarder 3D displays is developed. Second, an objective crosstalk metric based on a visual attention model is introduced.
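
    The crosstalk simulation and SSIM-based structural distortion map described above can be sketched as follows. The linear leakage model and the parameter values are assumptions for illustration; the paper's simulation is tailored to the optics of patterned retarder displays.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def simulate_crosstalk(view, other, alpha):
    """Linear leakage model (an assumption): a fraction alpha of the
    unintended view leaks into the intended one."""
    return (1 - alpha) * view + alpha * other

def ssim_map(x, y, win=7, L=1.0, k1=0.01, k2=0.03):
    """Local SSIM map between images x and y with values in [0, L]."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    return ((2*mx*my + c1) * (2*cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def crosstalk_distortion(left, right, alpha):
    """Structural distortion map: 1 - SSIM between each original view
    and its crosstalk-contaminated version, averaged over both views."""
    dl = 1 - ssim_map(left, simulate_crosstalk(left, right, alpha))
    dr = 1 - ssim_map(right, simulate_crosstalk(right, left, alpha))
    return (dl + dr) / 2
```

    In the full metric this map would then be weighted by the depth map and a visual salience map before spatial pooling.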

  15. Comparison of UAS-Based Photogrammetry Software for 3D Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating high-density point clouds as well as Digital Surface Models (DSMs) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
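
    Geometric accuracy assessments of this kind commonly report the RMSE of DSM heights against independently surveyed check points. A minimal sketch of that computation (the nearest-cell sampling and the grid-origin convention are illustrative assumptions, not the paper's workflow):

```python
import numpy as np

def dsm_rmse(dsm, checkpoints, origin, cell):
    """RMSE of DSM heights against surveyed check points.
    dsm: 2-D height grid; checkpoints: (N, 3) array of X, Y, Z;
    origin: (x0, y0) of the grid corner; cell: ground sample distance.
    Nearest-cell sampling; row = Y index, col = X index (illustrative)."""
    cols = np.round((checkpoints[:, 0] - origin[0]) / cell).astype(int)
    rows = np.round((checkpoints[:, 1] - origin[1]) / cell).astype(int)
    dz = dsm[rows, cols] - checkpoints[:, 2]
    return float(np.sqrt(np.mean(dz ** 2)))
```

    A fuller assessment would interpolate the DSM bilinearly at each check point and report mean error and standard deviation alongside the RMSE.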

  16. Imaging of the 3D dynamics of flagellar beating in human sperm.

    PubMed

    Silva-Villalobos, F; Pimentel, J A; Darszon, A; Corkidi, G

    2014-01-01

    The mechanical and environmental factors that regulate a fundamental event such as fertilization have been the subject of multiple studies. Nevertheless, the microscopic size of spermatozoa and the high beating frequency of their flagella (up to 20 Hz) impose a series of technological challenges for studying the mechanical factors implicated. Traditionally, owing to the inherent characteristics of rapid sperm movement and to the limitations of optical and confocal microscopes in following that movement in three dimensions (3D), flagellar dynamics have been analyzed in two dimensions, with the head confined to a surface. Flagella propel sperm, and although the head can be confined to a surface, flagellar movement is not restricted to 2D and always displays 3D components. In this work, we present a novel and useful tool to analyze sperm flagellar dynamics in 3D. The basis of the method is a 100 Hz oscillating objective mounted on a bright field optical microscope, covering a 16 micron depth range at a rate of ~5000 images per second. The best-focused flagellar subregions are associated with their respective real Z positions in 3D. Unprecedented graphical results that make the 3D movement of the flagella evident are shown in this work, and supplemental material illustrating a 3D animation of the experimental results is also included.

  17. Two-dimensional and 3-D images of thick tissue using time-constrained times-of-flight and absorbance spectrophotometry

    NASA Astrophysics Data System (ADS)

    Benaron, David A.; Lennox, M.; Stevenson, David K.

    1992-05-01

    Reconstructing deep-tissue images in real time using spectrophotometric data from optically diffusing thick tissues has been problematic. Continuous wave applications (e.g., pulse oximetry, regional cerebral saturation) ignore both the multiple paths traveled by the photons through the tissue and the effects of scattering, allowing scalar measurements but only under limited conditions; interferometry works poorly in thick, highly-scattering media; frequency-modulated approaches may not allow full deconvolution of scattering and absorbance; and pulsed-light techniques preserve information about the multiple paths taken by light through the tissue, but reconstruction is both computation-intensive and limited by the relative surface area available for detection of photons. We have developed a picosecond times-of-flight and absorbance (TOFA) optical system, time-constrained to measure only photons with a narrow range of path lengths arriving within a narrow angle of the emitter-detector axis. The delay until arrival of the earliest arriving photons is a function of both the scattering and absorbance of the tissues in a direct line between the emitter and detector, reducing the influence of surrounding tissues. Measurement using a variety of emitter and detector locations produces spatial information which can be analyzed in a standard 2-D grid, or subjected to computer reconstruction to produce tomographic images representing 3-D structure. Using such a technique, we have been able to demonstrate the principles of tc-TOFA, detect and localize diffusive and/or absorptive objects suspended in highly scattering media (such as blood admixed with yeast), and perform simple 3-D reconstructions using phantom objects. We are now attempting to obtain images in vivo. Potential future applications include use as a research tool and as a continuous, noninvasive, nondestructive monitor in diagnostic imaging, fetal monitoring, and neurologic and cardiac assessment. The technique may lead to real-time optical imaging and quantitation of tissue oxygen delivery.

  18. A Protocol for Real-time 3D Single Particle Tracking.

    PubMed

    Hou, Shangguo; Welsher, Kevin

    2018-01-03

    Real-time three-dimensional single particle tracking (RT-3D-SPT) has the potential to shed light on fast, 3D processes in cellular systems. Although various RT-3D-SPT methods have been put forward in recent years, tracking high-speed 3D diffusing particles at low photon count rates remains a challenge. Moreover, RT-3D-SPT setups are generally complex and difficult to implement, limiting their widespread application to biological problems. This protocol presents a RT-3D-SPT system named 3D Dynamic Photon Localization Tracking (3D-DyPLoT), which can track particles with high diffusive speed (up to 20 µm²/s) at low photon count rates (down to 10 kHz). 3D-DyPLoT employs a 2D electro-optic deflector (2D-EOD) and a tunable acoustic gradient (TAG) lens to drive a single focused laser spot dynamically in 3D. Combined with an optimized position estimation algorithm, 3D-DyPLoT can lock onto single particles with high tracking speed and high localization precision. Owing to its single-excitation, single-detection-path layout, 3D-DyPLoT is robust and easy to set up. This protocol discusses how to build 3D-DyPLoT step by step. First, the optical layout is described. Next, the system is calibrated and optimized by raster scanning a 190 nm fluorescent bead with the piezoelectric nanopositioner. Finally, to demonstrate real-time 3D tracking ability, 110 nm fluorescent beads are tracked in water.
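
    For context on the quoted diffusive speeds, the sketch below shows how a tracked 3D Brownian trajectory relates to its diffusion coefficient through the mean squared displacement, MSD(τ) = 6Dτ. This is a generic illustration of the physics, not part of the 3D-DyPLoT protocol itself.

```python
import numpy as np

def simulate_diffusion_3d(d_um2_s, dt, n_steps, rng):
    """3-D Brownian trajectory (in µm): each step is drawn per axis
    from N(0, 2*D*dt)."""
    steps = rng.normal(0.0, np.sqrt(2 * d_um2_s * dt), size=(n_steps, 3))
    return np.cumsum(steps, axis=0)

def estimate_d(traj, dt):
    """Estimate D from the one-step MSD: MSD(dt) = 6*D*dt in 3-D."""
    disp = np.diff(traj, axis=0)
    msd = np.mean(np.sum(disp ** 2, axis=1))
    return msd / (6 * dt)
```

    At D = 20 µm²/s and a 1 ms update interval, the RMS displacement per step is about 0.35 µm, which is why the tracking loop must respond on millisecond timescales.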

  19. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    PubMed Central

    Schendan, Haline E.; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. 
PMID:26441701

  20. Photogrammetry-Based Automated Measurements for Tooth Shape and Occlusion Analysis

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.; Gaboutchian, A. V.

    2016-06-01

    Tooth measurements (odontometry) are performed for various scientific and practical applications, including dentistry. Present-day techniques are increasingly based on 3D models, which offer wider prospects than measurements on real objects: teeth or their plaster copies. The main advantages emerge through the application of new measurement methods that provide the needed degree of non-invasiveness, precision, convenience and detail. Tooth measurement has always been regarded as time-consuming research, even more so with the new methods, owing to the wider opportunities they provide. This is where automation becomes essential for the further development and application of measurement techniques. In our research, automation in obtaining 3D models and in performing measurements provided essential data that were analysed to suggest recommendations for tooth preparation, one of the most exacting clinical procedures in prosthetic dentistry, within a comparatively short period of time. The original photogrammetric 3D reconstruction system generates 3D models of dental arches, reproduces their closure (occlusion), and performs a set of standard measurements in automated mode.

  1. Twin imaging phenomenon of integral imaging.

    PubMed

    Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi

    2018-05-14

    The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of these conclusions suit only integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused-illumination reflective integral imaging system. Interactive twin images, comprising a real and a virtual 3D image of one object, can be activated in the system. The phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems with 80 µm thickness have also been made to verify the phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) source; when the LED moves, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.

  2. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
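
    The geometric core of plane-intersection line extraction can be illustrated in closed form: the line direction is the cross product of the two plane normals, and a point on the line solves a small linear system. The sketch below shows only this geometry, not the paper's LSHP fitting pipeline.

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersection of the planes n1·x = d1 and n2·x = d2.
    Returns (point_on_line, unit_direction); direction = n1 × n2."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel or coincident")
    # A point on the line: solve the 3x3 system
    # [n1; n2; direction] x = [d1; d2; 0]
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```

    In practice the plane parameters would come from least-squares fits to the two point clusters flanking the edge, and the infinite line would then be clipped to the extent of its supporting points.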

  3. Usefulness of real-time three-dimensional ultrasonography in percutaneous nephrostomy: an animal study.

    PubMed

    Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang

    2018-05-17

    To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN). A hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided using two-dimensional (2D) US. Visualization of the needle tract, puncture duration and number of puncture attempts were recorded for the two groups. In group 1, the score for visualization of the needle tract, puncture duration and number of puncture attempts were 3, 7.3 ± 3.1 s and one, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s and 2.1 ± 0.6. Visualization of the needle tract in group 1 was superior to that in group 2, and puncture duration and number of attempts were both lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in visualizing the needle tract and the targeted pelvicalyceal system, leading to quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors. BJU International © 2018 BJU International. Published by John Wiley & Sons Ltd.

  4. [Real-time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing goal. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64° × 64° volume. The image is rendered in real time and is composed of 3 planes (B and C planes), which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  5. 3D surface parameterization using manifold learning for medial shape representation

    NASA Astrophysics Data System (ADS)

    Ward, Aaron D.; Hamarneh, Ghassan

    2007-03-01

    The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
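
    As a simplified stand-in for the nonlinear dimensionality reduction step above, a nearly flat sheet of points can be parameterized by projecting onto its two principal axes. The paper uses nonlinear manifold learning precisely because real medial sheets bend; the plain PCA sketch below only illustrates the intrinsic-2D-coordinate idea for the flat case.

```python
import numpy as np

def flatten_sheet_pca(points):
    """Project 3-D sheet points (N, 3) onto their two principal axes to
    obtain intrinsic 2-D coordinates (linear stand-in for manifold
    learning). Returns (coords_2d, axes) with axes of shape (2, 3)."""
    centered = points - points.mean(axis=0)
    # Right singular vectors of the centered data = principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T, vt[:2]
```

    With a curved sheet, a nonlinear method such as Isomap or LLE would replace the SVD step, but the downstream use is the same: a planar mesh is laid over the 2-D coordinates and mapped back to 3D to establish vertex correspondence.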

  6. Virtual 3D City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used for generating virtual 3D city models: in the first, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; and in the third, researchers use terrestrial images through close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and the other based on data input techniques (photogrammetry and laser techniques). After a detailed study, we give the conclusions of this research together with a short justification and analysis, and the present trend in 3D city modeling. This paper thus gives an overview of the techniques for generating virtual 3D city models using geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques plays a major role in creating a virtual 3D city model. Every technique and method has advantages and drawbacks, and the point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3D city models are very useful for various kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal administration, urban environmental management and the real-estate industry. The construction of virtual 3D city models has therefore been a most interesting research topic in recent years.

  7. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    DTIC Science & Technology

    2016-04-23

    The explosive growth of Internet media (partial-duplicate/similar images, 3D objects, 3D models, etc.) sheds bright... light on many promising applications in forensics, surveillance, 3D animation, mobile visual search, and 3D model/object search. Compared with the... and stable spatial configuration. Compared with the general 2D objects, 3D models/objects consist of 3D data information (typically a list of

  8. Small Business Innovations

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The PER-Force Handcontroller was originally developed for the International Space Station under a Johnson Space Center Small Business Innovation Research (SBIR) contract. Produced by Cybernet Systems Corporation, the unit is a force-reflecting system that manipulates robots or objects by "feel." The Handcontroller moves in six degrees of freedom, with real and virtual reality forces simulated by a 3-D molecular modeling software package. It is used in molecular modeling in metallurgy applications, satellite docking research, and in research on military unmanned ground vehicles.

  9. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high-performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications on the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high-performance 3D graphics.

  10. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel.
More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  11. Biomimetic Graphene-Based 3D Scaffold for Long-Term Cell Culture and Real-Time Electrochemical Monitoring.

    PubMed

    Hu, Xue-Bo; Liu, Yan-Ling; Wang, Wen-Jie; Zhang, Hai-Wei; Qin, Yu; Guo, Shan; Zhang, Xin-Wei; Fu, Lei; Huang, Wei-Hua

    2018-01-16

    Current achievements in the electrochemical monitoring of cells have mostly been gained on two-dimensional (2D) substrates, which fail to mimic the cellular environment and to accurately reproduce cellular functions within a three-dimensional (3D) tissue. In this regard, a 3D scaffold that concurrently integrates cell culture and electrochemical sensing is a promising platform for monitoring cells in real time under their in vivo-like 3D microenvironments. However, it is particularly challenging to construct such a multifunctional scaffold platform. Herein, we developed a 3-aminophenylboronic acid (APBA) functionalized graphene foam (GF) network, which combines the biomimetic property of APBA with the mechanical and electrochemical properties of GF. Hence, the GF network can serve as a 3D scaffold to culture cells for a long period with high viability and simultaneously as an electrode for highly sensitive electrochemical sensing. This allows real-time monitoring of the gaseous messenger H2S released from cells cultured on the 3D scaffold. This work represents considerable progress in fabricating a 3D cell culture scaffold with electrochemical properties, thereby facilitating future studies of physiologically relevant processes.

  12. Comparative studies on gravisensitive protists on ground (2D and 3D clinostats) and in microgravity

    NASA Astrophysics Data System (ADS)

    Hemmersbach, Ruth; Strauch, Sebastian M.; Seibt, Dieter; Schuber, Marianne

    2006-09-01

    In order to prepare and support space experiments, 2D and 3D clinostats are widely applied to study the influence of simulated weightlessness on biological systems. To evaluate the results, a comparison between the data obtained in simulation experiments and in real microgravity is necessary. We are currently analyzing the gravity-dependent behavior of the protists Paramecium biaurelia (ciliate) and Euglena gracilis (photosynthetic flagellate) on these different experimental platforms. Here, first results are presented concerning the behaviour of Euglena on a 2D fast-rotating clinostat, on a 3D clinostat and under real microgravity conditions (TEXUS sounding rocket flight), and of Paramecium on a 2D clinostat and in microgravity. Our data show similar results during 2D and 3D clinorotation compared to real microgravity with respect to the loss of orientation (gravitaxis) of Paramecium and Euglena and a decrease in the linearity of the cell tracks of Euglena. However, the increase in mean swimming velocities, especially during 3D clinorotation (Euglena) and 2D clinorotation (Paramecium), might indicate a persisting mechanostimulation of the cells. Further studies, including long-term 2D and 3D clinostat exposure, will enable us to assess the suitability of the applied simulation methods.

  13. Micro axial tomography: A miniaturized, versatile stage device to overcome resolution anisotropy in fluorescence light microscopy

    NASA Astrophysics Data System (ADS)

    Staier, Florian; Eipel, Heinz; Matula, Petr; Evsikov, Alexei V.; Kozubek, Michal; Cremer, Christoph; Hausmann, Michael

    2011-09-01

    With the development of novel fluorescence techniques, high resolution light microscopy has become a challenging technique for investigations of the three-dimensional (3D) micro-cosmos in cells and sub-cellular components. So far, all fluorescence microscopes applied for 3D imaging in biosciences show a spatially anisotropic point spread function, resulting in an anisotropic optical resolution or point localization precision. To overcome this shortcoming, micro axial tomography was suggested, which allows object tilting on the microscope stage and leads to an improvement in localization precision and spatial resolution. Here, we present a miniaturized device which can be implemented in a motor-driven microscope stage. The footprint of this device corresponds to a standard microscope slide. A special glass fiber can be manually adjusted in the object space of the microscope lens. Stepwise fiber rotation is controlled by a miniaturized stepping motor incorporated into the device. By means of a special mounting device, test particles were fixed onto glass fibers, optically localized with high precision, and automatically rotated to obtain views from different perspective angles under which the distances of corresponding pairs of objects were determined. From these angle-dependent distance values, the real 3D distance was calculated with a precision in the ten-nanometer range (corresponding here to an optical resolution of 10-30 nm) using standard microscopic equipment. As a proof of concept, the spindle apparatus of a mature mouse oocyte was imaged during metaphase II meiotic arrest under different perspectives. Only very few images registered under different rotation angles are sufficient for a full 3D reconstruction. The results indicate the principal advantage of the micro axial tomography approach for many microscopic setups, including those with resolutions improved by high-precision localization determination.
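    The geometry behind the angle-dependent distance measurements can be sketched as follows: for rotation about the fiber axis, the squared projected distance varies as A + B·cos 2θ + C·sin 2θ, so a linear least-squares fit recovers the true 3D distance as sqrt(A + sqrt(B² + C²)). This is an illustrative reconstruction of the principle, not the authors' code; all names are ours.

```python
import numpy as np

def true_3d_distance(angles, proj_dists):
    """Recover the true 3D distance between two point objects from their
    2D projected distances measured at several rotation angles.

    For rotation about the fiber (x) axis, the squared projected distance
    follows d(theta)^2 = A + B*cos(2*theta) + C*sin(2*theta), where
    A = ax^2 + r^2/2 and sqrt(B^2 + C^2) = r^2/2 (with r^2 = ay^2 + az^2).
    Hence D^2 = ax^2 + r^2 = A + sqrt(B^2 + C^2).
    """
    t = np.asarray(angles, dtype=float)
    d2 = np.asarray(proj_dists, dtype=float) ** 2
    # Linear least-squares fit of d^2 against [1, cos 2t, sin 2t]
    X = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    A, B, C = np.linalg.lstsq(X, d2, rcond=None)[0]
    return float(np.sqrt(A + np.hypot(B, C)))

# Synthetic check: two points separated by (3, 2, 1), viewed along z.
ax, ay, az = 3.0, 2.0, 1.0
angles = np.linspace(0.0, np.pi, 12)
proj = np.sqrt(ax**2 + (ay * np.cos(angles) - az * np.sin(angles))**2)
D = true_3d_distance(angles, proj)  # true distance is sqrt(14)
```

    With noisy measurements the same fit averages out per-angle localization error, which is consistent with the abstract's observation that only a few rotation angles suffice.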

  14. High-speed three-dimensional measurements with a fringe projection-based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther

    2014-11-01

    An optical three-dimensional (3-D) sensor based on a fringe projection technique that realizes the acquisition of the surface geometry of small objects was developed for highly resolved and ultrafast measurements. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved by systematic fringe code reduction and parallel data processing. The reduction of the length of the fringe image sequence was obtained by omitting the Gray code sequence, using the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm×20 mm and 40 mm×40 mm, with spatial resolutions between 10 and 20 μm. In order to obtain a robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little effort and time. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.

  15. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes. As a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the increasing development of stereoscopic 3D (S3D) technology, binocular 3D devices have become widely used. Under this circumstance, we intend to extend the widely used web rendering engine WebKit to support the description and display of S3D webpages, which makes an extension of SVG necessary. In this paper, we describe how to design and implement SVG shapes in stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, an important process in this project, is described as well. The modification of WebKit to support the generation of both the left view and the right view at the same time is also discussed. As the results show, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the feeling of depth and thickness, the shapes seem to be real 3D objects away from the screen, rather than simple curves and lines as before.
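    The core of generating left and right views from a 2D shape plus a depth attribute is a per-vertex horizontal parallax shift. The sketch below is a minimal off-axis model of that idea, not the paper's WebKit implementation; the eye-separation and viewer-distance defaults, the sign convention and all names are hypothetical.

```python
def parallax_offset(depth, eye_separation=60.0, viewer_distance=600.0):
    """Horizontal half-parallax (in px) for a vertex at signed depth
    `depth` (here positive = behind the screen plane). From similar
    triangles: shift = (eye_separation / 2) * depth / (viewer_distance + depth)."""
    return (eye_separation / 2.0) * depth / (viewer_distance + depth)

def stereo_views(points, depth):
    """Shift a 2D outline symmetrically to produce left- and right-view
    copies; rendering each view separately yields the S3D shape."""
    s = parallax_offset(depth)
    left = [(x + s, y) for x, y in points]
    right = [(x - s, y) for x, y in points]
    return left, right
```

    A thickness attribute would be handled the same way: the front and back faces of the shape get different depths, and hidden-line elimination then removes the edges occluded between them.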

  16. Low-Cost 3d Devices and Laser Scanners Comparison for the Application in Orthopedic Centres

    NASA Astrophysics Data System (ADS)

    Redaelli, D. F.; Gonizzi Barsanti, S.; Fraschini, P.; Biffi, E.; Colombo, G.

    2018-05-01

    Low-cost 3D sensors are nowadays widely available, and many different solutions are on the market. Some of these devices were developed for entertainment purposes, but they are also used for the acquisition and processing of 3D data with the aim of documentation, research and study. Given that these sensors were not developed for this purpose, it is necessary to evaluate their use in the capturing process. This paper presents a preliminary study comparing the Kinect 1 and 2 by Microsoft, the Structure Sensor by Occipital and the O&P Scan by Rodin4D in a medical scenario (i.e. human body scans). In particular, these sensors were compared to the Minolta Vivid 9i, chosen as reference because of its higher accuracy. Different test objects were analysed: a calibrated flat plane, for the evaluation of the systematic distance error of each device, and three different parts of a mannequin, used as samples of human body parts. The results showed that the use of a certified flat plane is a good starting point for characterizing the sensors, but a complete analysis with objects similar to those of the real context of application is required. For example, the Kinect 2 presented the best results among the low-cost sensors on the flat plane, while the Structure Sensor was more reliable on the mannequin parts.

  17. Evidence-Based Occupational Hearing Screening I: Modeling the Effects of Real-World Noise Environments on the Likelihood of Effective Speech Communication.

    PubMed

    Soli, Sigfrid D; Giguère, Christian; Laroche, Chantal; Vaillancourt, Véronique; Dreschler, Wouter A; Rhebergen, Koenraad S; Harkins, Kevin; Ruckstuhl, Mark; Ramulu, Pradeep; Meyers, Lawrence S

    The objectives of this study were to (1) identify essential hearing-critical job tasks for public safety and law enforcement personnel; (2) determine the locations and real-world noise environments where these tasks are performed; (3) characterize each noise environment in terms of its impact on the likelihood of effective speech communication, considering the effects of different levels of vocal effort, communication distances, and repetition; and (4) use this characterization to define an objective normative reference for evaluating the ability of individuals to perform essential hearing-critical job tasks in noisy real-world environments. Data from five occupational hearing studies performed over a 17-year period for various public safety agencies were analyzed. In each study, job task analyses by job content experts identified essential hearing-critical tasks and the real-world noise environments where these tasks are performed. These environments were visited, and calibrated recordings of each noise environment were made. The extended speech intelligibility index (ESII) was calculated for each 4-sec interval in each recording. These data, together with the estimated ESII value required for effective speech communication by individuals with normal hearing, allowed the likelihood of effective speech communication in each noise environment for different levels of vocal effort and communication distances to be determined. These likelihoods provide an objective norm-referenced and standardized means of characterizing the predicted impact of real-world noise on the ability to perform essential hearing-critical tasks. A total of 16 noise environments for law enforcement personnel and eight noise environments for corrections personnel were analyzed. Effective speech communication was essential to hearing-critical tasks performed in these environments. 
Average noise levels ranged from approximately 70 to 87 dBA in law enforcement environments and 64 to 80 dBA in corrections environments. The likelihood of effective speech communication at communication distances of 0.5 and 1 m was often less than 0.50 for normal vocal effort. Likelihood values often increased to 0.80 or more when raised or loud vocal effort was used. Effective speech communication at and beyond 5 m was often unlikely, regardless of vocal effort. ESII modeling of nonstationary real-world noise environments may prove an objective means of characterizing their impact on the likelihood of effective speech communication. The normative reference provided by these measures predicts the extent to which hearing impairments that increase the ESII value required for effective speech communication also decrease the likelihood of effective speech communication. These predictions may provide an objective evidence-based link between the essential hearing-critical job task requirements of public safety and law enforcement personnel and ESII-based hearing assessment of individuals who seek to perform these jobs.
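    For a single noise environment, the likelihood measure described above reduces to the fraction of 4-second analysis intervals whose ESII meets the criterion required for effective communication at a given vocal effort and distance. A minimal sketch of that reduction (the function name and example values are ours, not the study's):

```python
def communication_likelihood(esii_values, criterion):
    """Likelihood of effective speech communication in one recorded noise
    environment: the fraction of 4-second analysis intervals whose ESII
    meets or exceeds the criterion value required by a normal-hearing
    listener at the chosen vocal effort and communication distance."""
    ok = sum(1 for v in esii_values if v >= criterion)
    return ok / len(esii_values)
```

    A hearing impairment that raises the required criterion value lowers this fraction, which is the link between ESII-based hearing assessment and predicted on-the-job communication ability.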

  18. Enhancing Close-Up Image Based 3d Digitisation with Focus Stacking

    NASA Astrophysics Data System (ADS)

    Kontogianni, G.; Chliverou, R.; Koutsoudis, A.; Pavlidis, G.; Georgopoulos, A.

    2017-08-01

    The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological features, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning in general does not meet the accuracy requirements. A shallow Depth of Field (DoF) affects image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. Extending the DoF is a very important task that should be incorporated in the data collection to attain a better-quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, and especially with the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions included the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference against which the 3D models were compared; the results are presented.
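    Focus stacking itself can be sketched as a per-pixel selection of the sharpest frame in the bracket, using a local contrast measure such as the Laplacian. This is a generic illustration of the technique under that sharpness-selection assumption, not the exact pipeline used in the paper.

```python
import numpy as np

def focus_stack(images):
    """Merge a focus bracket into one all-in-focus image by keeping, at
    each pixel, the frame with the largest local contrast (a discrete
    Laplacian magnitude serves as the sharpness measure)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    sharp = np.empty_like(stack)
    for i, im in enumerate(stack):
        # 4-neighbour discrete Laplacian (wrap-around edges, for brevity)
        lap = (np.roll(im, 1, 0) + np.roll(im, -1, 0) +
               np.roll(im, 1, 1) + np.roll(im, -1, 1) - 4 * im)
        sharp[i] = np.abs(lap)
    best = np.argmax(sharp, axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

    Production tools additionally align the frames first (magnification changes slightly with focus distance) and smooth the selection map to avoid seams, which matters for photogrammetric use.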

  19. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units. ((c) 2005 APA, all rights reserved).

  20. 3D visual mechinism by neural networkings

    NASA Astrophysics Data System (ADS)

    Sugiyama, Shigeki

    2007-04-01

    Some computer vision systems are available on the market, but they remain far from everyday use in applications such as security monitoring or the recognition of a target object's behaviour. Sensing the surroundings requires recognizing detailed descriptions of an object, such as the distance to the object, its detailed figure and its edges, which present recognition systems cannot clearly resolve. This paper therefore studies the mechanisms by which a pair of human eyes recognizes distance, object edges and objects themselves, in order to extract the basic essences of vision mechanisms. These basic mechanisms of object recognition are then simplified and logically extended for application to a computer vision system. Some results of these studies are presented in this paper.

  1. An automatic panoramic image reconstruction scheme from dental computed tomography images

    PubMed Central

    Papakosta, Thekla K; Savva, Antonis D; Economopoulos, Theodore L; Gröhndal, H G

    2017-01-01

    Objectives: Panoramic images of the jaws are extensively used for dental examinations and/or surgical planning because they provide a general overview of the patient's maxillary and mandibular regions. Panoramic images are two-dimensional projections of three-dimensional (3D) objects. Therefore, it should be possible to reconstruct them from 3D radiographic representations of the jaws produced by CBCT scanning, obviating the need for additional X-ray exposure should panoramic views be needed. The aim of this article is to present an automated method for reconstructing panoramic dental images from CBCT data. Methods: The proposed methodology consists of a series of sequential processing stages for detecting a fitting dental arch, which is used for projecting the 3D information of the CBCT data onto the two-dimensional plane of the panoramic image. The detection is based on a template polynomial constructed from a training data set. Results: A total of 42 CBCT data sets of real clinical pre-operative and post-operative representations from 21 patients were used. Eight data sets were used for training the system and the rest for testing. Conclusions: The proposed methodology was successfully applied to the CBCT data sets, producing corresponding panoramic images suitable for examining the patients' maxillary and mandibular regions pre-operatively and post-operatively. PMID:28112548
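    The projection stage can be sketched as sampling the volume along the fitted dental arch: for each position along the arch curve, the column of voxels through the axial slices becomes one column of the panoramic image. The sketch below is a simplified nearest-neighbour version with the arch as a plain polynomial in image coordinates; the paper fits a template polynomial from training data, and all names here are ours.

```python
import numpy as np

def panoramic_from_volume(volume, arch_coeffs, n_samples=200):
    """Flatten a CBCT-like volume (z, y, x) into a panoramic image by
    sampling it along a polynomial dental arch y = poly(x): each arch
    position contributes one column of voxels through the axial slices.
    Nearest-neighbour sampling is used for brevity."""
    zdim, ydim, xdim = volume.shape
    xs = np.linspace(0, xdim - 1, n_samples)
    ys = np.clip(np.polyval(arch_coeffs, xs), 0, ydim - 1)
    cols = volume[:, np.round(ys).astype(int), np.round(xs).astype(int)]
    return cols  # shape (zdim, n_samples): the panoramic image
```

    A practical version would step along the arch at equal arc length and average a small slab of voxels perpendicular to the curve, mimicking the focal trough of a conventional panoramic machine.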

  2. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
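    The crosstalk correction step can be illustrated as per-pixel linear unmixing: if a calibration shot gives the 2×2 matrix that mixes the true red- and blue-path intensities into the recorded channels, inverting it separates the sub-images. This is a sketch of the idea under that linear-mixing assumption, not the authors' exact method; the matrix and names are hypothetical.

```python
import numpy as np

def separate_channels(red_raw, blue_raw, crosstalk):
    """Undo red/blue channel crosstalk in a recorded color image.
    `crosstalk` is the 2x2 mixing matrix M, assumed known from
    calibration, with [red_raw, blue_raw]^T = M @ [red_true, blue_true]^T
    at every pixel; it is inverted once and applied pixel-wise."""
    Minv = np.linalg.inv(np.asarray(crosstalk, dtype=float))
    raw = np.stack([red_raw, blue_raw]).reshape(2, -1)
    true = Minv @ raw
    return true[0].reshape(red_raw.shape), true[1].reshape(blue_raw.shape)
```

    The separated sub-images then act as the "left" and "right" views for standard stereo-DIC processing.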

  3. [3D modeling of the female pelvis by Computer-Assisted Anatomical Dissection: Applications and perspectives].

    PubMed

    Balaya, V; Uhl, J-F; Lanore, A; Salachas, C; Samoyeau, T; Ngo, C; Bensaid, C; Cornou, C; Rossi, L; Douard, R; Bats, A-S; Lecuru, F; Delmas, V

    2016-05-01

    To achieve a 3D vectorial model of the female pelvis by Computer-Assisted Anatomical Dissection and to assess its educational and surgical applications. From the "visible female" database of the Visible Human Project(®) (VHP) of the National Library of Medicine (NLM, United States), we used 739 transverse anatomical slices of 0.33 mm thickness extending from L4 to the trochanters. The manual segmentation of each anatomical structure was done with the Winsurf(®) software, version 4.3. Each anatomical element was built as a separate vectorial object. The whole color-rendered vectorial model with realistic textures was exported in 3D PDF format to allow real-time interactive manipulation with the Acrobat(®) Pro version 11 software. Each element can be handled separately at any transparency, which allows anatomical learning by systems: skeleton, pelvic organs, urogenital system, arterial and venous vascularization. This 3D anatomical model can be used as a data bank for teaching fundamental anatomy. This realistic and interactive 3D vectorial model constitutes an efficient educational tool for teaching the anatomy of the pelvis. 3D printing of the pelvis is possible with the new printers. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  4. Virtual reality technique to assist measurement of degree of shaking of two minarets of an ancient building

    NASA Astrophysics Data System (ADS)

    Homainejad, Amir S.; Satari, Mehran

    2000-05-01

    VR brings users a sense of reality through the computer, while a virtual environment (VE) is a simulated world that can take users to any point and direction of the object. VR and VE can be very useful if accurate and precise data are used, allowing users to work with a realistic model. Photogrammetry is a technique able to collect and provide accurate and precise data for building a 3D model in a computer. Data can be collected from various sensors and cameras, and methods of data collection vary with the method of image acquisition. VR involves real-time graphics, 3D models and displays, and it has applications in the entertainment industry, flight simulators and industrial design.

  5. Development of monograph titled "augmented chemistry aldehida & keton" with 3 dimensional (3D) illustration as a supplement book on chemistry learning

    NASA Astrophysics Data System (ADS)

    Damayanti, Latifah Adelina; Ikhsan, Jaslin

    2017-05-01

    The integration of information technology in education is increasingly carried out through learning media. Three-dimensional (3D) molecular modeling was implemented in Augmented Reality as a tangible manifestation of increasingly modern technology. Based on augmented reality, a three-dimensional virtual object is projected in real time into the actual environment. This paper reviews the use of a chemistry learning supplement book on aldehydes and ketones equipped with three-dimensional molecular models that students can inspect from various viewpoints. To play the 3D illustrations printed in the book, smartphones with the open-source integrated Augmented Reality software can be used. The aims of this research were to develop the monograph of aldehydes and ketones with three-dimensional (3D) illustrations, to determine the specifications of the monograph, and to determine its quality. The quality of the monograph was evaluated by experienced chemistry teachers on five aspects: contents/materials, presentation, language and images, graphics, and software engineering. The results show that the book is of very good quality for use as a chemistry learning supplement book.

  6. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models, to bridge the gap between information users and information providers as regards the management of the information they share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to easily handle, manage and create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software to manage virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  7. A cognitive approach to vision for a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.

  8. Samba: a real-time motion capture system using wireless camera sensor networks.

    PubMed

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  9. Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks

    PubMed Central

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-01-01

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618

  10. a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application

    NASA Astrophysics Data System (ADS)

    Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.

    2017-11-01

    Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and directly modify them. This makes it possible for buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt property on sale is demonstrated beforehand, so that investors have an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require a specialist to create such environments. In this study, we have created a real-estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application, and that it satisfied the expectations of property buyers.

  11. Using the stereokinetic effect to convey depth - Computationally efficient depth-from-motion displays

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Proffitt, Dennis R.

    1992-01-01

    Recent developments in microelectronics have encouraged the use of 3D databases to create compelling volumetric renderings of graphical objects. However, even with the computational capabilities of current-generation graphical systems, real-time displays of such objects are difficult, particularly when dynamic spatial transformations are involved. In this paper we discuss a type of visual stimulus (the stereokinetic effect display) that is computationally far less complex than a true three-dimensional transformation but yields an equally compelling depth impression, often perceptually indiscriminable from the true spatial transformation. Several possible applications for this technique are discussed (e.g., animating contour maps and air traffic control displays so as to evoke accurate depth percepts).

  12. Removal of jitter noise in 3D shape recovery from image focus by using Kalman filter.

    PubMed

    Jang, Hoon-Seok; Muhammad, Mannan Saeed; Choi, Tae-Sun

    2018-02-01

    In regard to Shape from Focus, one critical factor impacting system application is mechanical vibration of the translational stage, which causes jitter noise along the optical axis. This noise is not detectable by simply observing the image; however, when focus measures are applied, inaccuracies in the depth occur. In this article, the jitter noise and the focus curves are modeled by a Gaussian distribution and a quadratic function, respectively. A Kalman filter is then designed and applied to eliminate this noise from the focus curves, as a post-processing step after the focus measure is applied. Experiments are implemented with simulated and real objects to show the usefulness of the proposed algorithm. © 2017 Wiley Periodicals, Inc.
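    The post-processing step described in this record can be sketched with a minimal 1D Kalman filter that smooths a focus curve corrupted by Gaussian jitter noise; the constant-value state model, the variance settings, and the function name are illustrative assumptions, not the paper's parameters:

    ```python
    # Minimal 1D Kalman filter for smoothing a noisy focus curve.
    # State model: the focus value is locally constant; jitter noise
    # is treated as zero-mean Gaussian measurement noise.
    # process_var and noise_var are illustrative, not tuned values.

    def kalman_smooth(measurements, process_var=1e-4, noise_var=0.05):
        x = measurements[0]   # initial state estimate
        p = 1.0               # initial estimate uncertainty
        filtered = []
        for z in measurements:
            p += process_var              # predict: uncertainty grows
            k = p / (p + noise_var)       # Kalman gain
            x += k * (z - x)              # update toward measurement z
            p *= (1.0 - k)
            filtered.append(x)
        return filtered

    # A jitter spike in an otherwise flat focus curve is damped:
    smoothed = kalman_smooth([1.0, 1.0, 2.0, 1.0, 1.0])
    ```

    In practice the filtered curve, rather than the raw focus measures, would then be fed to the quadratic focus-curve fit.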

  13. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.

  14. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data are still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known and exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips either uses the Z-value of a 3D volume or computes the depth information from a 2D image to provide a real 3D experience without special glasses. Within this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.

  15. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment, a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
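    The claimed pipeline is essentially a table of per-point records with mapped visual attributes and a visibility flag; a toy sketch of that data structure follows (field and function names are hypothetical, not taken from the patent):

    ```python
    # Toy version of the visualization-table pipeline: each data point
    # becomes a record with mapped visual attributes and a visibility
    # flag updated from a filter on the underlying data.
    # Field and function names are hypothetical, not from the patent.

    def build_table(points):
        return [{"data": p, "attrs": {}, "visible": True} for p in points]

    def apply_mapping(table, mapping):
        """mapping: visualization attribute name -> function of a data point."""
        for row in table:
            for attr, fn in mapping.items():
                row["attrs"][attr] = fn(row["data"])

    def update_visibility(table, predicate):
        """Set each record's visibility flag from a filter on its data."""
        for row in table:
            row["visible"] = predicate(row["data"])

    # Map one data dimension to size, hide points below a threshold:
    table = build_table([{"mass": 1.0}, {"mass": 5.0}])
    apply_mapping(table, {"size": lambda p: p["mass"] * 2.0})
    update_visibility(table, lambda p: p["mass"] > 2.0)
    ```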

  16. Precipitation processes developed during TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): 3D Cloud Resolving Model Simulation

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.

    2006-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR), NOAA GFDL, the U.K. Met. Office, Colorado State University and NASA Goddard Space Flight Center. An improved 3D Goddard Cumulus Ensemble (GCE) model was recently used to simulate periods during TOGA COARE (December 19-27, 1992), GATE (September 1-7, 1974), SCSMEX (May 18-26 and June 2-11, 1998) and KWAJEX (August 7-13, August 18-21, and August 29-September 12, 1999) using a 512 by 512 km domain and 41 vertical layers. The major objectives of this paper are: (1) to identify the differences and similarities in the simulated precipitation processes and their associated surface and water energy budgets in TOGA COARE, GATE, KWAJEX, and SCSMEX, and (2) to assess the impact of microphysics, the radiation budget and surface fluxes on the organization of convection in the tropics.

  17. Conception and realization of a semiconductor based 240 GHz full 3D MIMO imaging system

    NASA Astrophysics Data System (ADS)

    Weisenstein, Christian; Kahl, Matthias; Friederich, Fabian; Haring Bolívar, Peter

    2017-02-01

    Multiple-input multiple-output (MIMO) imaging systems in the terahertz frequency range have high potential in the field of non-destructive testing (NDT). With such systems it is possible to detect defects in composite materials, for example cracks or delaminations in fiber composites. To inspect mass-produced products it is necessary to examine the objects in close to real time on a conveyor without affecting the production cycle time. In this work we present the conception and realization of a 3D MIMO imaging system for in-line inspection of composite materials and structures. To achieve a lateral resolution of 1 mm, in order to detect such small defects in composite materials with a moderate number of elements, precise sensor design is crucial. In our approach we use the effective aperture concept. The designed sparse array consists of 32 transmitters and 30 receivers based on planar semiconductor components. High range resolution is achieved by an operating frequency between 220 GHz and 260 GHz in a stepped-frequency continuous-wave (SFCW) setup. A matched-filter approach is used to simulate the reconstructed 3D image through the array, which allows evaluation of the designed array geometry with regard to resolution and side-lobe level. In contrast to earlier demonstrations, in which synthetic reconstruction is performed only in a 2D plane, an optics-free full 3D reconstruction has been implemented in our concept. Based on this simulation we designed an array geometry that makes it possible to resolve objects with a resolution smaller than 1 mm at a moderate side-lobe level.
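    The effective aperture concept mentioned in this record designs a sparse MIMO array so that the pairwise combinations of transmit and receive elements emulate a much larger virtual array; a toy 1D illustration follows (the geometry is invented for illustration, not the paper's 32-by-30 array):

    ```python
    # Effective-aperture sketch for a sparse MIMO array: the two-way
    # response of the transmit and receive arrays is equivalent to a
    # single virtual array whose element positions are the pairwise
    # sums of the transmit and receive element positions.
    # The 1D geometry below is invented for illustration.

    def effective_aperture(tx_positions, rx_positions):
        """Virtual element positions (1D, arbitrary units)."""
        return sorted({t + r for t in tx_positions for r in rx_positions})

    # 4 widely spaced transmitters + 4 densely spaced receivers give
    # 16 distinct virtual elements from only 8 physical ones:
    tx = [0, 4, 8, 12]
    rx = [0, 1, 2, 3]
    virtual = effective_aperture(tx, rx)
    ```

    This is why a sparse physical array can reach the resolution of a much denser one, provided the virtual positions fill the aperture without gaps.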

  18. Geospatial database for heritage building conservation

    NASA Astrophysics Data System (ADS)

    Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.

    2014-02-01

    Heritage buildings are icons from the past that exist in the present. Through heritage architecture, we can learn about the economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other factors. In order to preserve this heritage for future generations, the recording and documentation of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modeling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D and can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The results from data acquisition will become a guideline for 3D model development. This 3D model will be exported to the GIS format in order to develop a database for heritage building conservation, in which the requirements for the heritage building conservation process are included. Through this research, a proper database for storing and documenting heritage building conservation data will be developed.

  19. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Parida, G.; Rajan, K. S.

    2017-05-01

    Current methods for object segmentation, extraction, and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up, geometric-rule-based approach was used initially to devise a way to segment buildings out of LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied both to synthetic datasets and to a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed work is that it depends on no form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof; this focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping urban planning and the smart-cities endeavour.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caillet, V; Colvill, E; Royal North Shore Hospital, Sydney, NSW

    Purpose: The objective of this study was to investigate the dosimetric benefits of multi-leaf collimator (MLC) tracking for lung SABR treatments in end-to-end clinically realistic planning and delivery scenarios. Methods: The clinical benefits of MLC tracking were assessed using previously delivered treatment plans and physical experiments. The 10 most recent single-lesion lung SABR patients were re-planned following a 4D-GTV-based real-time adaptive protocol (PTV defined as the end-of-exhalation GTV plus 5.0 mm margins). The plans were delivered on a Trilogy Varian linac. Electromagnetic transponders (Calypso, Varian Medical Systems, USA) were embedded into a programmable moving phantom (HexaMotion platform) tracked with the Varian Calypso system. For each physical experiment, the MLC positions were collected and used as input for dose reconstruction. For both planned and physical experiments, the OAR dose metrics from the conventional and real-time adaptive SABR plans (Mean Lung Dose (MLD), V20 for lung, and near-maximum dose (D2%) for spine and heart) were statistically compared. The Wilcoxon test was used to compare plan and physical experiment dose metrics. Results: While maintaining target coverage, percentage reductions in dose metrics to the OARs were observed for both planned and physical experiments. Comparing the two plans showed an MLD percentage reduction (MLDr) of 25.4% (absolute difference of 1.41 Gy) and 28.9% (1.29%) for the V20r. D2% percentage reductions for spine and heart were respectively 27.9% (0.3 Gy) and 20.2% (0.3 Gy). For the physical experiments, MLDr was 23.9% (1.3 Gy), and V20r 37.4% (1.6%). D2% reductions for spine and heart were respectively 27.3% (0.3 Gy) and 19.6% (0.3 Gy). For both plans and physical experiments, significant OAR dose differences (p<0.05) were found between the conventional SABR and real-time adaptive plans. Conclusion: Application of MLC tracking for lung SABR patients has the potential to reduce the dose to OARs during radiation therapy.

  1. INTELLIGENT MONITORING SYSTEM WITH HIGH TEMPERATURE DISTRIBUTED FIBEROPTIC SENSOR FOR POWER PLANT COMBUSTION PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwang Y. Lee; Stuart S. Yin; Andre Boheman

    2004-12-26

    The objective of the proposed work is to develop an intelligent distributed fiber-optic sensor system for real-time monitoring of high temperatures in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which will be essential in assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high-temperature distributed fiber-optic sensor capable of measuring temperatures greater than 2000 °C with a spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, improvements were made to the performance of in-fiber gratings fabricated in single-crystal sapphire fibers, tests were performed on the grating performance of single-crystal sapphire fiber with new fabrication methods, and the fabricated grating was applied to a high-temperature sensor. Under Task 2, models obtained from 3D modeling of the Demonstration Boiler were used to study relationships between temperature and NOx, as the multi-dimensionality of such systems is most comparable with real-life boiler systems. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we investigate a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. The 3D temperature data is furnished by the Penn State Energy Institute using FLUENT. Given a set of empirical data with no analytic expression, we first develop an analytic description and then extend that model along a single axis.

  2. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer-aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in-vivo, real-time 3D high-quality data acquisition and processing still need improving. Advances in GPU computational power have made it possible to achieve near real-time 3D intraoral in-vivo scanning of a patient's teeth. In this paper we explore, from a real-time perspective, a hardware-software-GPU solution that addresses all the requirements mentioned above. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  3. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of a human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  4. Finger tracking for hand-held device interface using profile-matching stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau

    2013-01-01

    Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. If this method of operation is used by people who are driving, the probability of deaths and accidents occurring substantially increases. With a non-contact control interface, people do not need to touch the screen. As a result, they do not need to pay as much attention to their phones and can thus drive more safely than they would otherwise. This interface can be achieved with real-time stereo vision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3-D information from a pair of stereo images in real time. While this algorithm does involve a trade-off between accuracy and processing speed, its results show that the accuracy is sufficient for the practical use of recognizing human poses and tracking finger movements. By choosing an interval of disparity, an object at a certain distance range can be segmented; in other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space at a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
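    The disparity-interval segmentation described in this record follows directly from the stereo relation Z = f * B / d; a small sketch with placeholder camera parameters (not values from the paper):

    ```python
    # Distance-based segmentation from stereo disparity: depth follows
    # Z = f * B / d, so a disparity interval [d_lo, d_hi] selects
    # points whose distance lies between z_near and z_far.
    # FOCAL_PX and BASELINE_M are placeholder camera parameters.

    FOCAL_PX = 700.0    # focal length in pixels (assumed)
    BASELINE_M = 0.06   # stereo baseline in meters (assumed)

    def depth_from_disparity(d):
        return FOCAL_PX * BASELINE_M / d

    def disparity_interval(z_near, z_far):
        """Disparity bounds that keep points between z_near and z_far."""
        return FOCAL_PX * BASELINE_M / z_far, FOCAL_PX * BASELINE_M / z_near

    def segment_by_distance(disparity_map, z_near, z_far):
        d_lo, d_hi = disparity_interval(z_near, z_far)
        return [[d_lo <= d <= d_hi for d in row] for row in disparity_map]
    ```

    Note the inversion: nearer objects have larger disparity, so the far distance bound sets the lower disparity bound and vice versa.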

  5. Active contour configuration model for estimating the posterior ablative margin in image fusion of real-time ultrasound and 3D ultrasound or magnetic resonance images for radiofrequency ablation: an experimental study.

    PubMed

    Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk

    2017-12-21

    The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. 
Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.

  6. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    PubMed

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  7. GSMS and space views: Advanced spacecraft monitoring tools

    NASA Technical Reports Server (NTRS)

    Carlton, Douglas; Vaules, David, Jr.; Mandl, Daniel

    1993-01-01

    The Graphical Spacecraft Monitoring System (GSMS) processes and translates real-time telemetry data from the Gamma Ray Observatory (GRO) spacecraft into high resolution 2-D and 3-D color displays showing the spacecraft's position relative to the Sun, Earth, Moon, and stars, its predicted orbit path, its attitude, instrument fields of view, and other items of interest to the GRO Flight Operations Team (FOT). The GSMS development project is described and the approach being undertaken for implementing Space Views, the next version of GSMS, is presented. Space Views is an object-oriented graphical spacecraft monitoring system that will become a standard component of Goddard Space Flight Center's Transportable Payload Operations Control Center (TPOCC).

  8. Using a 3D virtual supermarket to measure food purchase behavior: a validation study.

    PubMed

    Waterlander, Wilma Elzeline; Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-04-28

    There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of "presence" (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. 
The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real: 12.6%), and meat and fish (virtual: 16.5%; real: 16.8%). Significant differences in proportional expenditures were observed for 6 food groups, with largest differences (virtual - real) for dairy (in expenditure 6.5%, P<.001; in items 2.2%, P=.04) and fresh fruit and vegetables (in expenditure: -3.1%, P=.04; in items: 5.9%, P=.002). There was no trend of overspending in the Virtual Supermarket and participants experienced a medium-to-high presence (88%, 73/83 scored medium; 8%, 7/83 scored high). Shopping patterns in the Virtual Supermarket were comparable to those in real life. Overall, the Virtual Supermarket is a valid tool to measure food purchasing behavior. Nevertheless, it is important to improve the functionality of some food categories, in particular fruit and vegetables and dairy. The results of this validation will assist in making further improvements to the software and with optimization of the internal and external validity of this innovative methodology.
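    The proportional-expenditure comparison underlying these results can be sketched as follows; the food groups and amounts are made-up illustrations, not the study's data:

    ```python
    # Proportional expenditure per food group: each group's spend as a
    # share of the basket total, then differenced between the virtual
    # and real supermarkets in percentage points.
    # The groups and amounts below are made-up illustrations.

    def proportional_expenditure(basket):
        total = sum(basket.values())
        return {group: spend / total for group, spend in basket.items()}

    virtual = {"fruit_veg": 14.3, "dairy": 19.1, "other": 66.6}
    real = {"fruit_veg": 17.4, "dairy": 12.6, "other": 70.0}

    # Difference (virtual - real) in percentage points per group:
    v, r = proportional_expenditure(virtual), proportional_expenditure(real)
    diff_pp = {g: 100.0 * (v[g] - r[g]) for g in virtual}
    ```

    Working with shares of the basket total, rather than absolute spend, is what makes virtual and real baskets of different sizes comparable.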

  9. Real-time 3D human pose recognition from reconstructed volume via voxel classifiers

    NASA Astrophysics Data System (ADS)

    Yoo, ByungIn; Choi, Changkyu; Han, Jae-Joon; Lee, Changkyo; Kim, Wonjun; Suh, Sungjoo; Park, Dusik; Kim, Junmo

    2014-03-01

    This paper presents a human pose recognition method which simultaneously reconstructs a human volume based on an ensemble of voxel classifiers from a single depth image in real time. Human pose recognition is a difficult task since a single depth camera can capture only the visible surfaces of a human body. In order to recognize the invisible (self-occluded) surfaces of a human body, the proposed algorithm employs voxel classifiers trained with multi-layered synthetic voxels. Specifically, ray-casting onto a volumetric human model generates a synthetic voxel, where each voxel consists of a 3D position and an ID corresponding to the body part. The synthesized volumetric data, which contain both visible and invisible body voxels, are utilized to train the voxel classifiers. As a result, the voxel classifiers not only identify the visible voxels but also reconstruct the 3D positions and the IDs of the invisible voxels. The experimental results show improved performance in estimating human poses due to the capability of inferring the invisible human body voxels. It is expected that the proposed algorithm can be applied to many fields such as telepresence, gaming, virtual fitting, the wellness business, and real 3D content control on real 3D displays.

  10. Real-time 3D ultrasound guidance of autonomous surgical robot for shrapnel detection and breast biopsy

    NASA Astrophysics Data System (ADS)

    Rogers, Albert J.; Light, Edward D.; von Allmen, Daniel; Smith, Stephen W.

    2009-02-01

    Two studies have been conducted using real-time 3D ultrasound and an automated robot system for carrying out surgical tasks. The first task is to perform a breast lesion biopsy automatically after detection by ultrasound. Combining 3D ultrasound with traditional mammography allows real-time guidance of the biopsy needle. Image processing techniques analyze volumes to calculate the location of a target lesion. This position was converted into the coordinate system of a three-axis robot, which moved a needle probe to touch the lesion. The second task is to remove shrapnel from a tissue phantom autonomously. In some emergency situations, shrapnel detection in the body is necessary for quick treatment. Furthermore, small or uneven shrapnel geometry may hinder localization by typical ultrasound imaging methods. Vibrations and small displacements can be induced in ferromagnetic shrapnel by a variable electromagnet. We used real-time 3D color Doppler to locate this motion for 2 mm long needle fragments and determined the 3D position of each fragment in the scanner coordinates. The rms error of the image-guided robot over 5 trials was 1.06 mm for this task, which was accomplished in 76 seconds.
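    The conversion of a target position from scanner coordinates into the robot's coordinate system is a rigid transform; a minimal sketch assuming a z-axis rotation plus translation calibration (the calibration values in a real system would come from registration; everything here is illustrative):

    ```python
    # Converting a target position from ultrasound-scanner coordinates
    # into a three-axis robot frame via a rigid transform (rotation
    # about z plus translation). All calibration values are assumed.

    import math

    def to_robot_frame(p_scanner, theta_z, translation):
        """Rotate p_scanner about the z axis by theta_z, then translate (mm)."""
        x, y, z = p_scanner
        c, s = math.cos(theta_z), math.sin(theta_z)
        tx, ty, tz = translation
        return (c * x - s * y + tx,
                s * x + c * y + ty,
                z + tz)

    # Example: a lesion at (1, 0, 5) mm in the scanner frame, with a
    # 90-degree frame rotation and a 10 mm x-offset calibration:
    target = to_robot_frame((1.0, 0.0, 5.0), math.pi / 2, (10.0, 0.0, 0.0))
    ```

    A full 3D rotation would use a rotation matrix or quaternion from the registration step; the single-axis case keeps the arithmetic visible.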

  11. 3D cardiac μ tissues within a microfluidic device with real-time contractile stress readout

    PubMed Central

    Aung, Aereas; Bhullar, Ivneet Singh; Theprungsirikul, Jomkuan; Davey, Shruti Krishna; Lim, Han Liang; Chiu, Yu-Jui; Ma, Xuanyi; Dewan, Sukriti; Lo, Yu-Hwa; McCulloch, Andrew; Varghese, Shyni

    2015-01-01

    We present the development of three-dimensional (3D) cardiac microtissues within a microfluidic device that enables real-time, in situ contractile stress measurement. Using a 3D patterning technology that allows precise spatial distribution of cells within the device, we created an array of 3D cardiac microtissues from neonatal mouse cardiomyocytes. We integrated the 3D micropatterning technology with microfluidics to achieve perfused cell-laden structures. The cells were encapsulated within a degradable gelatin methacrylate hydrogel, which was sandwiched between two polyacrylamide hydrogels. The polyacrylamide hydrogels were used as “stress sensors” to acquire the contractile stresses generated by the beating cardiac cells. The cardiac-specific response of the engineered 3D system was examined by exposing it to epinephrine, an adrenergic agonist known to increase the magnitude and frequency of cardiac contractions. In response to exogenous epinephrine, the engineered cardiac tissues exhibited increased beating frequency and stress magnitude. Such cost-effective and easy-to-adapt 3D cardiac systems with real-time functional readout could be an attractive technological platform for drug discovery and development. PMID:26588203
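
    As a rough illustration of how a soft substrate can act as a “stress sensor”: if the gel's stiffness is known, a measured deformation converts to stress via linear elasticity. This one-liner is a simplified sketch under a uniaxial, small-strain assumption, not the authors' calibration procedure; all names and numbers are hypothetical.

```python
def contractile_stress_kpa(displacement_um, gel_thickness_um, youngs_modulus_kpa):
    """Engineering strain (deflection / thickness) times the gel's
    Young's modulus gives a first-order stress estimate in kPa."""
    strain = displacement_um / gel_thickness_um
    return youngs_modulus_kpa * strain

# e.g., a 5 um deflection of a 100 um gel with E = 10 kPa -> 0.5 kPa
```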

  12. Real-time 3-D contrast-enhanced transcranial ultrasound and aberration correction.

    PubMed

    Ivancevich, Nikolas M; Pinton, Gianmarco F; Nicoletto, Heather A; Bennett, Ellen; Laskowitz, Daniel T; Smith, Stephen W

    2008-09-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3-D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3-D contrast-enhanced transcranial ultrasound. Using real-time 3-D (RT3D) ultrasound and microbubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and nine via the suboccipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the suboccipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44%, the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology.

  13. Real-Time 3D Contrast-Enhanced Transcranial Ultrasound and Aberration Correction

    PubMed Central

    Ivancevich, Nikolas M.; Pinton, Gianmarco F.; Nicoletto, Heather A.; Bennett, Ellen; Laskowitz, Daniel T.; Smith, Stephen W.

    2008-01-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3D contrast-enhanced transcranial ultrasound. Using real-time 3D (RT3D) ultrasound and micro-bubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and 9 via the sub-occipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the sub-occipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44% the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology. PMID:18395321

  14. See-through 3D technology for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young

    2017-06-01

    Augmented reality has recently attracted considerable attention as one of the most promising next-generation technologies. To move toward ideal augmented reality, 3D virtual information must be integrated into the real world, and this integration should go unnoticed by users, blurring the boundary between the virtual and real worlds. The ultimate device for augmented reality therefore reconstructs and superimposes 3D virtual information on the real world so that the two are indistinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings with angular and wavelength selectivity. Index-matched optical elements are partially reflective elements that use a compensation element for index matching. Using these optical combiners, we implemented see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with a demonstration and analysis of fundamental research toward head-mounted see-through 3D displays.

  15. Mechanisms for Human Spatial Competence

    DTIC Science & Technology

    2007-01-01

    Published as Lecture Notes: Gunzelmann, G., & Lyon, D. R. (2007). Mechanisms of human spatial competence. In T. Barkowsky, G. Ligozat, & D. ... the ACT-R community. References (fragment): 1. Richardson, A., Montello, D., & Hegarty, M.: Spatial Knowledge Acquisition from Maps, and from Navigation in Real ... Rotation of Three-Dimensional Objects. Science 171, 701–703 (1971). 7. Just, M., & Carpenter, P.: Cognitive Coordinate Systems: Accounts of Mental ...

  16. Quantification of mitral valve morphology with three-dimensional echocardiography--can measurement lead to better management?

    PubMed

    Lee, Alex Pui-Wai; Fang, Fang; Jin, Chun-Na; Kam, Kevin Ka-Ho; Tsui, Gary K W; Wong, Kenneth K Y; Looi, Jen-Li; Wong, Randolph H L; Wan, Song; Sun, Jing Ping; Underwood, Malcolm J; Yu, Cheuk-Man

    2014-01-01

    The mitral valve (MV) has complex 3-dimensional (3D) morphology and motion. Advances in real-time 3D echocardiography (RT3DE) have revolutionized clinical imaging of the MV by providing clinicians with realistic visualization of the valve. Thus far, RT3DE of MV structure and dynamics has largely depended on subjective, qualitative interpretation of the 3D images of the valve rather than objective, reproducible measurement. RT3DE combined with image-processing techniques provides precise segmentation and reliable quantification of the complex 3D morphology and rapid motion of the MV. This new approach to imaging may provide additional quantitative descriptions that are useful in diagnostic and therapeutic decision-making. Quantitative analysis of the MV using RT3DE has increased our understanding of the pathologic mechanisms of degenerative, ischemic, functional, and rheumatic MV disease. Most recently, 3D morphologic quantification has entered clinical use to provide more accurate diagnosis of MV disease and to plan surgery and transcatheter interventions. Current limitations of this quantitative approach to MV imaging include labor-intensive image segmentation and the lack of a clear definition of the clinical significance of many morphologic parameters. This review summarizes the current development and applications of quantitative analysis of MV morphology using RT3DE.

  17. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  18. Elasticity-based three dimensional ultrasound real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.

    2009-02-01

    Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use has been hindered by the lack of real-time visualization methods capable of producing high-quality 3D renderings of the target or surface of interest. Volume rendering is a well-known visualization method that can display clear surfaces from acquired volumetric data and has a growing number of applications using CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical real-time 3D ultrasound volume rendering has been achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task owing to a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on a GPU, which gives an update rate of 40 volumes/sec.
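
    The strain-plus-gradient opacity idea can be sketched as follows. The exponential mapping from strain to opacity and the mixing weight are assumptions for illustration; the abstract only states that a strain-derived opacity can be mixed with a conventional gradient-based one.

```python
import numpy as np

def voxel_opacity(strain, bmode, w=0.5, strain_scale=0.02):
    """Blend a strain-derived opacity (stiff, low-strain voxels stay
    opaque; soft, high-strain voxels fade out) with a normalized
    gradient-magnitude opacity computed from the B-mode volume."""
    alpha_strain = np.exp(-np.asarray(strain, dtype=float) / strain_scale)
    gx, gy, gz = np.gradient(np.asarray(bmode, dtype=float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    alpha_grad = grad_mag / (grad_mag.max() + 1e-9)
    return w * alpha_strain + (1.0 - w) * alpha_grad
```

On a GPU, the same per-voxel arithmetic maps naturally to a fragment or compute shader, which is what makes the reported 40 volumes/sec update rate plausible.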

  19. A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note.

    PubMed

    Abe, Yuichiro; Sato, Shigenobu; Kato, Koji; Hyakumachi, Takahiko; Yanagibashi, Yasushi; Ito, Manabu; Abumi, Kuniyoshi

    2013-10-01

    Augmented reality (AR) is an imaging technology by which virtual objects are overlaid onto images of real objects captured in real time by a tracking camera. This study aimed to introduce a novel AR guidance system called virtual protractor with augmented reality (VIPAR) to visualize a needle trajectory in 3D space during percutaneous vertebroplasty (PVP). The AR system used for this study comprised a head-mount display (HMD) with a tracking camera and a marker sheet. An augmented scene was created by overlaying the preoperatively generated needle trajectory path onto a marker detected on the patient using AR software, thereby providing the surgeon with augmented views in real time through the HMD. The accuracy of the system was evaluated by using a computer-generated simulation model in a spine phantom and also evaluated clinically in 5 patients. In the 40 spine phantom trials, the error of the insertion angle (EIA), defined as the difference between the attempted angle and the insertion angle, was evaluated using 3D CT scanning. Computed tomography analysis of the 40 spine phantom trials showed that the EIA in the axial plane significantly improved when VIPAR was used compared with when it was not used (0.96° ± 0.61° vs 4.34° ± 2.36°, respectively). The same held true for EIA in the sagittal plane (0.61° ± 0.70° vs 2.55° ± 1.93°, respectively). In the clinical evaluation of the AR system, 5 patients with osteoporotic vertebral fractures underwent VIPAR-guided PVP from October 2011 to May 2012. The postoperative EIA was evaluated using CT. The clinical results of the 5 patients showed that the EIA in all 10 needle insertions was 2.09° ± 1.3° in the axial plane and 1.98° ± 1.8° in the sagittal plane. There was no pedicle breach or leakage of polymethylmethacrylate. VIPAR was successfully used to assist in needle insertion during PVP by providing the surgeon with an ideal insertion point and needle trajectory through the HMD. The findings indicate that AR guidance technology can become a useful assistive device during spine surgeries requiring percutaneous procedures.
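
    The accuracy metric above, the error of the insertion angle (EIA), is simply the difference between the attempted and achieved needle angles, summarized as mean ± SD over trials. A minimal sketch (the helper names are ours, not the paper's):

```python
import math

def eia_deg(attempted_deg, achieved_deg):
    """Error of the insertion angle: |attempted - achieved|, in degrees."""
    return abs(attempted_deg - achieved_deg)

def mean_sd(values):
    """Sample mean and standard deviation, as used for the
    reported 'mean +/- SD' figures."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return m, sd
```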

  20. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    NASA Astrophysics Data System (ADS)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-induced bit errors. Robust 3D-MVV transmission over wireless channels has therefore become an important research issue, given restricted resources and severe channel errors. The 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object, so high compression ratios are needed to meet future bandwidth constraints. Unfortunately, highly compressed 3D-MVV data become more sensitive and vulnerable to packet losses, especially under heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the realistic 3D-MVV transmission scenario. The SVD-watermarked 3D-MVV frames are first converted to their luminance and chrominance components, which are then converted to binary format. Chaotic interleaving is then applied prior to modulation; it reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD-watermarked 3D-MVV frames were executed. The experimental results show that the received SVD-watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios, and watermark extraction remains possible in the proposed framework.
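
    The chaotic Baker interleaving step can be illustrated with the standard discretized Baker map on a square block, keyed by a partition of the block size. Conventions for row/column orientation vary between papers, so this is one common form of the map, not necessarily the exact variant used here:

```python
import numpy as np

def baker_interleave(block, key):
    """Discretized Baker-map permutation of an N x N block.
    `key` is a tuple of integers summing to N, each dividing N;
    it serves as the secret interleaving key."""
    N = block.shape[0]
    assert block.shape == (N, N)
    assert sum(key) == N and all(N % n == 0 for n in key)
    out = np.empty_like(block)
    Ni = 0  # running offset of the current strip
    for n in key:
        q = N // n
        for r in range(Ni, Ni + n):
            for s in range(N):
                # stretch the n x N strip into an N x n strip
                out[q * (r - Ni) + s % q, (s - s % q) // q + Ni] = block[r, s]
        Ni += n
    return out
```

Because the map is a bijection on the N x N grid, deinterleaving at the receiver is simply the inverse permutation; applying it before modulation spreads burst errors across the frame, which is what improves robustness to fading.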
