Science.gov

Sample records for 3d object retrieval

  1. Sketch-driven mental 3D object retrieval

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Sahbi, Hichem

    2010-02-01

    3D object recognition and retrieval have recently gained considerable interest because of the limitations of "2D-to-2D" approaches. The latter suffer from several drawbacks such as the lack of information (due, for instance, to occlusion), pose sensitivity, illumination changes, etc. Our main motivation is to combine discrimination and easy interaction by allowing simple (but multiple) 2D specifications of queries and their retrieval from 3D gallery sets. We introduce a novel "2D sketch-to-3D model" retrieval framework with the following contributions: (i) first, a novel generative approach for aligning and normalizing the pose of 3D gallery objects and extracting their 2D canonical views is introduced. (ii) Afterwards, robust and compact contour signatures are extracted from the set of 2D canonical views. We also introduce a pruning approach to speed up the whole search process in a coarse-to-fine way. (iii) Finally, object ranking is performed using our variant of elastic dynamic programming, which considers only a subset of possible matches, thereby providing a considerable gain in performance for the same amount of errors. Our experiments are reported and compared on the Princeton Shape Benchmark, clearly showing the good performance of our framework w.r.t. the other approaches. An iPhone demo of this method is available and allows us to achieve "2D sketch to 3D object" querying and interaction.
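    The elastic dynamic programming step above aligns 2D contour signatures while restricting the search to a subset of possible matches. The Python sketch below (numpy assumed) illustrates that idea with a generic band-constrained, DTW-style matcher on synthetic signatures; it is not the authors' exact variant.

    # Minimal sketch of band-constrained elastic matching between two 1-D contour
    # signatures (e.g. centroid-distance profiles of a sketch and a canonical view).
    # Generic DTW-style illustration, not the paper's exact algorithm.
    import numpy as np

    def elastic_distance(a, b, band=10):
        """DP alignment cost; only cells within `band` of the diagonal are
        considered, which is the 'subset of possible matches'."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - band), min(m, i + band) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)

    # Toy usage: two noisy versions of the same closed-contour signature.
    t = np.linspace(0, 2 * np.pi, 128)
    query = 1.0 + 0.3 * np.sin(3 * t)
    model = 1.0 + 0.3 * np.sin(3 * (t + 0.1)) + 0.01 * np.random.randn(128)
    print(elastic_distance(query, model))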

  2. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and admit a multi-view representation. State-of-the-art methods depend heavily on their own camera array setting for capturing views of a 3-D object and use the complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning that views can be captured from any direction without any camera array restriction. The views (including the query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained on these view clusters, and retrieval works by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capture and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
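    A minimal analogue of this estimate/decode use of an HMM is sketched below in Python: an HMM is fitted to the query object's view descriptors and gallery objects are ranked by the log-likelihood of their own view sets under that model. It assumes the third-party hmmlearn package and uses random features as stand-ins for real view descriptors; it is not the EVBOR pipeline itself.

    # Schematic HMM-based view retrieval (toy analogue, not the paper's method).
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    d = 16                                   # view-descriptor dimensionality
    query_views = rng.normal(size=(12, d))   # 12 views of the query object

    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(query_views)                   # "HMM estimate" (training) step

    gallery = {f"object_{k}": rng.normal(loc=0.1 * k, size=(10, d)) for k in range(5)}
    scores = {name: model.score(views) for name, views in gallery.items()}  # "HMM decode"
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(ranking)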

  3. Intraclass retrieval of nonrigid 3D objects: application to face recognition.

    PubMed

    Passalis, Georgios; Kakadiaris, Ioannis A; Theoharis, Theoharis

    2007-02-01

    As the size of the available collections of 3D objects grows, database transactions become essential for their management with the key operation being retrieval (query). Large collections are also precategorized into classes so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intraclass retrieval tasks. We advocate that such intraclass problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a parameterized annotated model of the shape of the class objects, incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for nonrigid object classes. We apply our method to the face recognition domain, one of the most challenging intraclass retrieval tasks. We used the Face Recognition Grand Challenge v2 database, yielding an average verification rate of 95.2 percent at a false accept rate of 10^-3. The latest results of our work can be found at http://www.cbl.uh.edu/UR8D/.

  4. Visual Semantic Based 3D Video Retrieval System Using HDFS

    PubMed Central

    Kumar, C.Ranjith; Suguna, S.

    2016-01-01

    This paper brings out a neoteric frame of reference for visual semantic based 3D video search and retrieval applications. Newfangled 3D retrieval applications spotlight shape analysis tasks such as object matching, classification and retrieval, rather than sticking entirely to video retrieval. In this ambit, we delve into the 3D-CBVR (Content Based Video Retrieval) concept for the first time. For this purpose, we intend to hitch BOVW and MapReduce onto a 3D framework. Instead of conventional shape-based local descriptors, we coalesce shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook, and a histogram is produced. Further, matching is performed using a soft weighting scheme with the L2 distance function. As a final step, retrieved results are ranked according to their index value and returned to the user as feedback. In order to handle a prodigious amount of data and achieve efficacious retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system; the results show that the proposed work gives accurate results and also reduces the time complexity. PMID:28003793
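    The matching step above combines a bag-of-visual-words histogram, a soft weighting scheme and an L2 distance. A minimal Python sketch of that combination is given below; the random codebook stands in for the TB-PCT clustering output and all features are synthetic.

    # Minimal bag-of-visual-words matching with soft weighting and L2 distance.
    import numpy as np

    def soft_histogram(descriptors, codebook, sigma=1.0):
        """Each local descriptor votes for all visual words with Gaussian weights
        (distances are shifted by the per-descriptor minimum for stability)."""
        d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)        # soft assignment per descriptor
        h = w.sum(axis=0)
        return h / np.linalg.norm(h)             # L2-normalised histogram

    rng = np.random.default_rng(1)
    codebook = rng.normal(size=(64, 32))         # 64 visual words, 32-D features
    query = soft_histogram(rng.normal(size=(200, 32)), codebook)
    video = soft_histogram(rng.normal(size=(250, 32)), codebook)
    print(np.linalg.norm(query - video))         # L2 matching score (smaller = closer)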

  5. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper brings out a neoteric frame of reference for visual semantic based 3D video search and retrieval applications. Newfangled 3D retrieval applications spotlight shape analysis tasks such as object matching, classification and retrieval, rather than sticking entirely to video retrieval. In this ambit, we delve into the 3D-CBVR (Content Based Video Retrieval) concept for the first time. For this purpose, we intend to hitch BOVW and MapReduce onto a 3D framework. Instead of conventional shape-based local descriptors, we coalesce shape, color and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the TB-PCT (Threshold Based Predictive Clustering Tree) algorithm is used to generate the visual codebook, and a histogram is produced. Further, matching is performed using a soft weighting scheme with the L2 distance function. As a final step, retrieved results are ranked according to their index value and returned to the user as feedback. In order to handle a prodigious amount of data and achieve efficacious retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system; the results show that the proposed work gives accurate results and also reduces the time complexity.

  6. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features of the model. For this reason, they perform less effectively on models with a similar global shape but different local shapes. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structure feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and compute the total similarity between the models. A system that realizes this approach was built and tested on a database of 200 objects, achieving the expected results. The results show that the proposed algorithm improves the precision and the recall rate effectively.

  7. Perception-based shape retrieval for 3D building models

    NASA Astrophysics Data System (ADS)

    Zhang, Man; Zhang, Liqiang; Takis Mathiopoulos, P.; Ding, Yusi; Wang, Hao

    2013-01-01

    With the help of 3D search engines, a large number of 3D building models can be retrieved freely online. A serious disadvantage of most rotation-insensitive shape descriptors is their inability to distinguish between two 3D building models which differ in their main axes but appear similar when one of them is rotated. To resolve this problem, we present a novel upright-based normalization method which not only correctly rotates such building models, but also greatly simplifies and accelerates the extraction and the matching of building models' shape descriptors. Moreover, the abundance of architectural styles significantly hinders the effective shape retrieval of building models. Our research has shown that buildings with different designs are not well distinguished by the widely recognized shape descriptors for general 3D models. Motivated by this observation and to further improve the shape retrieval quality, a new building matching method is introduced and analyzed based on concepts found in the field of perception theory and the well-known Light Field descriptor. The resulting normalized building models are first classified using the qualitative shape descriptors of Shell and Unevenness, which outline integral geometrical and topological information. These models are then ordered with the help of an improved quantitative shape descriptor which we term the Horizontal Light Field Descriptor, since it assembles detailed shape characteristics. To accurately evaluate the proposed methodology, an enlarged building shape database which extends previous well-known shape benchmarks was implemented, as well as a model retrieval system supporting inputs from 2D sketches and 3D models. Various experimental performance evaluation results have shown that, as compared to previous methods, retrievals employing the proposed matching methodology are faster and more consistent with human recognition of spatial objects. In addition these performance

  8. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

    This paper addresses the issue of 3D mesh indexation by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is proposed here. Intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation and computational complexity) with a spatial alignment procedure which leads to a geometrically invariant behavior. Experimental results have been carried out upon the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a categorized ground truth subset, are reported in terms of Bull Eye Percentage (BEP) score and compared to those obtained by applying the MPEG-7 3D SD. It is shown that the O3DHTD outperforms the MPEG-7 3D SD by up to 28%.
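    The idea of a Hough-based mesh descriptor can be illustrated as follows: every triangle votes, weighted by its area, for the plane it supports, parameterised by (rho, theta, phi), and the accumulator is flattened into a feature vector. The Python sketch below (numpy assumed) is only illustrative; the binning and the optimisations that define the actual O3DHTD are not reproduced.

    # Illustrative 3D Hough-style mesh descriptor (not the O3DHTD itself).
    import numpy as np

    def hough_descriptor(vertices, faces, bins=(8, 8, 8)):
        v = vertices[faces]                      # (F, 3, 3) triangle vertices
        centroids = v.mean(axis=1)
        normals = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
        areas = 0.5 * np.linalg.norm(normals, axis=1)
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        rho = np.abs((centroids * n).sum(axis=1))          # plane offset
        theta = np.arccos(np.clip(n[:, 2], -1, 1))          # polar angle of normal
        phi = np.arctan2(n[:, 1], n[:, 0]) % (2 * np.pi)    # azimuth of normal
        H, _ = np.histogramdd(np.stack([rho, theta, phi], 1),
                              bins=bins, weights=areas,
                              range=[(0, rho.max() + 1e-9), (0, np.pi), (0, 2 * np.pi)])
        return (H / H.sum()).ravel()

    # Toy usage on a single mesh (a tetrahedron).
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    print(hough_descriptor(verts, faces).shape)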

  9. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of ... Many view digital watermarking as a potential solution for copyright protection of valuable digital materials like CD-quality audio, publication ... watermark. The object can be an image, an audio clip, a video clip, or a 3D model. Some papers discuss watermarking other forms of multimedia data

  10. Recognizing 3D Object Using Photometric Invariant.

    DTIC Science & Technology

    1995-02-01

    model and the data space coordinates, using centroid invariance of corresponding groups of feature positions. Tests are given to show the stability and ... recognizing 3D objects. In our testing, it took only 0.2 seconds to derive corresponding positions in the model and the image for natural pictures.

  11. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was
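    A toy stand-in for such a threshold ruleset is sketched below in Python: voxels of a synthetic 3-D velocity cube passing a velocity threshold are grouped into connected bodies and small ones are discarded. This only illustrates the rule-based idea, not the object-oriented analysis software or the thresholds used in the study.

    # Toy threshold "ruleset" on a 3-D data cube (illustration only).
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    cube = rng.normal(loc=6.0, scale=0.3, size=(40, 40, 40))   # velocity model (km/s)
    cube[10:20, 10:20, 5:30] = 7.5                              # synthetic fast body

    mask = cube > 7.0                                 # rule 1: velocity threshold
    labels, n = ndimage.label(mask)                   # rule 2: 3-D connectivity
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    objects = [i + 1 for i, s in enumerate(sizes) if s > 500]   # rule 3: minimum volume
    print(f"{n} candidate bodies, {len(objects)} kept after the size rule")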

  12. Representation and classification of 3-D objects.

    PubMed

    Csakany, P; Wallace, A M

    2003-01-01

    This paper addresses the problem of generic object classification from three-dimensional depth or meshed data. First, surface patches are segmented on the basis of differential geometry and quadratic surface fitting. These are represented by a modified Gaussian image that includes the well-known shape index. Learning is an interactive process in which a human teacher indicates corresponding patches, but the formation of generic classes is unaided. Classification of unknown objects is based on the measurement of similarities between feature sets of the objects and the generic classes. The process is demonstrated on a group of three-dimensional (3-D) objects built from both CAD and laser-scanned depth data.

  13. Deep Nonlinear Metric Learning for 3-D Shape Retrieval.

    PubMed

    Xie, Jin; Dai, Guoxian; Zhu, Fan; Shao, Ling; Fang, Yi

    2016-12-28

    Effective 3-D shape retrieval is an important problem in 3-D shape analysis. Recently, feature learning-based shape retrieval methods have been widely studied, where the distance metrics between 3-D shape descriptors are usually hand-crafted. In this paper, motivated by the fact that deep neural networks can model nonlinearity well, we propose to learn an effective nonlinear distance metric between 3-D shape descriptors for retrieval. First, the locality-constrained linear coding method is employed to encode each vertex on the shape, and the encoding coefficient histogram is formed as the global 3-D shape descriptor to represent the shape. Then, a novel deep metric network is proposed to learn a nonlinear transformation that maps the 3-D shape descriptors to a nonlinear feature space. The proposed deep metric network minimizes a discriminative loss function that enforces the similarity between a pair of samples from the same class to be small and the similarity between a pair of samples from different classes to be large. Finally, the distance between the outputs of the metric network is used as the similarity for shape retrieval. The proposed method is evaluated on the McGill, SHREC'10 ShapeGoogle, and SHREC'14 Human shape datasets. Experimental results on the three datasets validate the effectiveness of the proposed method.
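    The discriminative loss described above can be written, in a generic margin-based form, as in the Python sketch below: same-class pairs are pulled together and different-class pairs pushed beyond a margin. The paper's actual network architecture and loss details are not reproduced here.

    # Generic margin-based discriminative loss over pairs of shape descriptors.
    import numpy as np

    def discriminative_loss(x1, x2, same_class, margin=1.0):
        d = np.linalg.norm(x1 - x2, axis=1)           # distances in the learned space
        pull = same_class * d ** 2                     # small distance for same class
        push = (1 - same_class) * np.maximum(0.0, margin - d) ** 2
        return np.mean(pull + push)

    rng = np.random.default_rng(3)
    x1, x2 = rng.normal(size=(8, 64)), rng.normal(size=(8, 64))
    same = rng.integers(0, 2, size=8).astype(float)    # 1 = same class, 0 = different
    print(discriminative_loss(x1, x2, same))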

  14. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood

  15. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  16. 3D-shape-based retrieval within the MPEG-7 framework

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-05-01

    Because of the continuous development of multimedia technologies, virtual worlds and augmented reality, 3D content has become a common feature of today's information systems. Hence, standardizing tools for content-based indexing of visual data is a key issue for computer vision related applications. Within the framework of the future MPEG-7 standard, tools for intelligent content-based access to 3D information, targeting applications such as search, retrieval and browsing of 3D model databases, have been recently considered and evaluated. In this paper, we present the 3D Shape Spectrum Descriptor (3D SSD), recently adopted within the current MPEG-7 Committee Draft (CD). The proposed descriptor aims at providing an intrinsic shape description of a 3D mesh and is defined as the distribution of the shape index over the entire mesh. The shape index is a local geometric attribute of a 3D surface, expressed as the angular coordinate of a polar representation of the principal curvature vector. Experimental results have been carried out upon the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a ground truth subset, are reported in terms of Bull Eye Percentage (BEP) score.
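    A minimal sketch of such a shape-spectrum descriptor is given below in Python (numpy assumed): Koenderink's shape index is computed per vertex from principal curvatures, which are assumed to be already estimated, and accumulated into a histogram. The MPEG-7 3D SSD uses more elaborate binning and area weighting than this illustration.

    # Shape-index histogram as a toy shape-spectrum descriptor.
    import numpy as np

    def shape_index(k1, k2):
        """Koenderink shape index in [-1, 1]; k1 >= k2 are principal curvatures."""
        k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
        return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

    def shape_spectrum(k1, k2, bins=64):
        s = shape_index(k1, k2)
        hist, _ = np.histogram(s, bins=bins, range=(-1.0, 1.0))
        return hist / hist.sum()

    rng = np.random.default_rng(4)
    k1, k2 = rng.normal(size=5000), rng.normal(size=5000)   # toy curvature estimates
    print(shape_spectrum(k1, k2)[:8])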

  17. 3D model retrieval using probability density-based shape descriptors.

    PubMed

    Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis

    2009-06-01

    We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories.
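    A toy version of such a density-based descriptor is sketched below: local surface features are summarised by a kernel density estimate sampled at a fixed set of target points, and the sampled values form the descriptor. Plain scipy KDE is used for illustration, whereas the paper couples KDE with the fast Gauss transform; the feature vectors here are random stand-ins.

    # Toy density-based shape descriptor via kernel density estimation.
    import numpy as np
    from scipy.stats import gaussian_kde

    def density_descriptor(local_features, targets):
        """local_features: (n, d) per-point features; targets: (m, d) fixed grid."""
        kde = gaussian_kde(local_features.T)           # scipy expects (d, n)
        desc = kde(targets.T)                           # density sampled at targets
        return desc / desc.sum()

    rng = np.random.default_rng(5)
    features = rng.normal(size=(2000, 3))               # stand-in local surface features
    targets = np.stack(np.meshgrid(*[np.linspace(-2, 2, 4)] * 3), -1).reshape(-1, 3)
    print(density_descriptor(features, targets).shape)  # (64,) descriptor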

  18. 3D wind field retrieval from spaceborne Doppler radar

    NASA Astrophysics Data System (ADS)

    Lemaêtre, Y.; Viltard, N.

    2012-11-01

    Numerous space missions carrying a radar are presently envisioned, particularly to study tropical rain systems. Among those missions, BOITATA is a joint effort between Brazil (INPE/AEB) and France (CNES). The goal is to embark a Doppler radar with scanning possibilities onboard a low-orbiting satellite. This instrument should be implemented in addition to a Passive Microwave Radiometer (PMR) between 19 and 183 GHz, an improved ScaraB-like broadband radiometer, a mm/submm PMR and a lightning detection instrument. This package would be meant to document the feedback of the ice microphysics on the rain systems life cycle and on their heat and radiative budgets. Since the microphysics and the water and energy budgets are strongly driven by the dynamics, the addition of a Doppler radar with scanning possibilities could provide precious information (3D wind and rain fields). It would allow us to build a large statistics of such critical information over the entire tropics and for all the stages of development of the convection. This information could be used to better understand the tropical convection and to improve convection parameterization relevant for cloud and climate models and associated applications such as now-casting and risk prevention. The present work focuses on the feasibility to retrieve 3D winds in precipitating areas from such a radar. A simulator of some parts of the spaceborne radar is developed to estimate the precision on the retrieved wind field depending on the scanning strategies and instrumental parameters and to determine the best sampling parameters.

  19. Optical 3D imaging and visualization of concealed objects

    NASA Astrophysics Data System (ADS)

    Berginc, G.; Bellet, J.-B.; Berechet, I.; Berechet, S.

    2016-09-01

    This paper gives new insights on optical 3D imagery. In this paper we explore the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging and surveillance because of its ability to identify tumors or concealed objects. We consider the problem of 3D reconstruction based upon 2D angle-dependent laser images. The objective of this new 3D laser imaging is to provide users a complete 3D reconstruction of objects from available 2D data limited in number. The 2D laser data used in this paper come from simulations that are based on the calculation of the laser interactions with the different meshed objects of the scene of interest, or from experimental 2D laser images. We show that combining the Radon transform on 2D laser images with the Maximum Intensity Projection can generate 3D views of the considered scene from which we can extract the 3D concealed object in real time. With different original numerical or experimental examples, we investigate the effects of the input contrasts. We show the robustness and the stability of the method. We have developed a new patented method of 3D laser imaging based on three-dimensional reflective tomographic reconstruction algorithms and an associated visualization method. In this paper we present the global 3D reconstruction and visualization procedures.

  20. Design of 3d Topological Data Structure for 3d Cadastre Objects

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. A.; Rahman, A. Abdul; Hassan, M. I.

    2016-09-01

    This paper describes the design of a 3D modelling and topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. The Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.
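    The five classes listed above could be rendered schematically as plain data structures, as in the Python sketch below. The class and attribute names are illustrative assumptions, not the LADM/ISO 19152 schema or the authors' TEN implementation.

    # Schematic rendering of the five LADM-based TEN classes (names are illustrative).
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Point:
        id: int
        xyz: Tuple[float, float, float]

    @dataclass
    class BoundaryFaceString:
        id: int
        points: List[int]                  # ordered Point ids forming the string

    @dataclass
    class BoundaryFace:
        id: int
        strings: List[int]                 # BoundaryFaceString ids bounding the face

    @dataclass
    class Tetrahedron:
        id: int
        faces: Tuple[int, int, int, int]   # four BoundaryFace ids (TEN cell)

    @dataclass
    class SpatialUnit:
        id: int
        tetrahedra: List[int] = field(default_factory=list)  # cadastre volume as a TEN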

  1. 3D object recognition based on local descriptors

    NASA Astrophysics Data System (ADS)

    Jakab, Marek; Benesova, Wanda; Racev, Marek

    2015-01-01

    In this paper, we propose an enhanced method of 3D object description and recognition based on local descriptors using the RGB image and depth information (D) acquired by a Kinect sensor. Our main contribution is an extension of the SIFT feature vector by 3D information derived from the depth map (SIFT-D). We also propose a novel local depth descriptor (DD) that includes a 3D description of the key point neighborhood. The 3D descriptor thus defined can then enter the decision-making process. Two different approaches have been proposed, tested and evaluated in this paper. The first approach deals with an object recognition system using the original SIFT descriptor in combination with our novel proposed 3D descriptor, where the proposed 3D descriptor is responsible for the pre-selection of the objects. The second approach demonstrates object recognition using an extension of the SIFT feature vector by the local depth description. In this paper, we present the results of two experiments for the evaluation of the proposed depth descriptors. The results show an improvement in the accuracy of the recognition system that includes the 3D local description compared with the same system without it. Our experimental object recognition system works in near real time.

  2. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  3. Identifying positioning-based attacks against 3D printed objects and the 3D printing process

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-05-01

    Zeltmann et al. demonstrated that damage to the structural integrity and other quality attributes of an object can be caused by changing its position on a 3D printer's build plate. On some printers, for example, object surfaces and support members may be stronger when oriented parallel to the X or Y axis. The challenge presented by the need to assure 3D printed object orientation is that it can be altered in numerous places throughout the system. This paper considers attack scenarios and discusses where attacks that change printing orientation can occur in the process. An imaging-based solution to combat this problem is presented.

  4. A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images

    NASA Astrophysics Data System (ADS)

    Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.

    2017-03-01

    Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
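    A much-simplified illustration of a descriptor in this spirit is sketched below in Python: per-facet area and the distance from each facet centroid to the mesh centroid are collected into a joint histogram. The published ADLD formulation is more involved, so this is only a toy analogue.

    # Toy area/centroid-distance mesh descriptor (not the published ADLD).
    import numpy as np

    def area_distance_descriptor(vertices, faces, bins=8):
        tri = vertices[faces]                                   # (F, 3, 3)
        cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
        areas = 0.5 * np.linalg.norm(cross, axis=1)
        centroid = vertices.mean(axis=0)
        dists = np.linalg.norm(tri.mean(axis=1) - centroid, axis=1)
        H, _, _ = np.histogram2d(dists / dists.max(), areas / areas.max(),
                                 bins=bins, range=[[0, 1], [0, 1]])
        return (H / H.sum()).ravel()

    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    print(area_distance_descriptor(verts, faces).shape)          # (64,)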

  5. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the 3D model to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases in order to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images and videos.

  6. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  7. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly and the trend of this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper aims to introduce a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video contents. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. For the video segment and the video frame, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We describe the query methods using the structured query language (SQL) in detail. The experiments indicate that the model is a multi-purpose, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.

  8. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural based visualisation products. The continuum of 3D plants models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. This approach of 3D plant object visualisation is primarily evident from the visualisation of plants using photographed billboarded images, to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  9. 3D object hiding using three-dimensional ptychography

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Wang, Zhibo; Li, Tuo; Pan, An; Wang, Yali; Shi, Yishi

    2016-09-01

    We present a novel technique for 3D object hiding by applying three-dimensional ptychography. Compared with 3D information hiding based on holography, the proposed ptychography-based hiding technique is easier to implement, because the reference beam and high-precision interferometric optical setup are not required. The acquisition of the 3D object and the ptychographic encoding process are performed optically. Owing to the introduction of probe keys, the security of the ptychography-based hiding system is significantly enhanced. A series of experiments and simulations demonstrate the feasibility and imperceptibility of the proposed method.

  10. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imagery to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data limited in number and with low representativeness. The 2D laser data used in this paper come from simulations that are based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  11. Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval.

    PubMed

    Liu, An-An; Nie, Wei-Zhi; Gao, Yue; Su, Yu-Ting

    2016-05-01

    Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.

  12. 3D dimeron as a stable topological object

    NASA Astrophysics Data System (ADS)

    Yang, Shijie; Liu, Yongkai

    2015-03-01

    Searching for novel topological objects is always an intriguing task for scientists in various fields. We study a new three-dimensional (3D) topological structure called the 3D dimeron in trapped two-component Bose-Einstein condensates. The 3D dimeron differs from the conventional 3D skyrmion in that the condensates host two interlocked vortex-rings. We demonstrate that the vortex-rings are connected by a singular string and that the complex constitutes a vortex-molecule. The stability is investigated by numerically evolving the Gross-Pitaevskii equations, given a coherent Rabi coupling between the two components. Alternatively, we find that the stable 3D dimeron can be naturally generated from a vortex-free Gaussian wave packet via incorporating a synthetic non-Abelian gauge potential into the condensates. This work is supported by the NSF of China under Grant No. 11374036 and the National 973 program under Grant No. 2012CB821403.

  13. Embedding objects during 3D printing to add new functionalities

    PubMed Central

    2016-01-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication. These

  14. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  15. 3D object recognition in TOF data sets

    NASA Astrophysics Data System (ADS)

    Hess, Holger; Albrecht, Martin; Grothof, Markus; Hussmann, Stephan; Oikonomidis, Nikolaos; Schwarte, Rudolf

    2003-08-01

    In recent years, 3D vision systems based on the Time-Of-Flight (TOF) principle have gained more importance than Stereo Vision (SV). TOF offers direct depth-data acquisition, whereas SV requires a great amount of computational power for a comparable 3D data set. Due to the enormous progress in TOF techniques, 3D cameras can nowadays be manufactured and used for many practical applications. Hence there is a great demand for new, accurate algorithms for 3D object recognition and classification. This paper presents a new strategy and algorithm designed for fast and solid object classification. A challenging example - accurate classification of a (half-) sphere - demonstrates the performance of the developed algorithm. Finally, the transition from a general model of the system to specific applications such as Intelligent Airbag Control and Robot Assistance in Surgery is introduced. The paper concludes with the current research results in the above mentioned fields.
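    As a toy stand-in for the sphere-classification example, the Python sketch below fits a sphere to a TOF point cloud by linear least squares and accepts the hypothesis when the RMS residual is small. This is an assumption-laden illustration, not the algorithm developed in the paper.

    # Linear least-squares sphere fit with a residual-based acceptance test.
    import numpy as np

    def fit_sphere(points):
        # |p|^2 = 2 c.p + (r^2 - |c|^2)  ->  linear system in (c, r^2 - |c|^2)
        A = np.hstack([2 * points, np.ones((len(points), 1))])
        b = (points ** 2).sum(axis=1)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = w[:3]
        radius = np.sqrt(w[3] + (center ** 2).sum())
        rms = np.sqrt(np.mean((np.linalg.norm(points - center, axis=1) - radius) ** 2))
        return center, radius, rms

    rng = np.random.default_rng(6)
    dirs = rng.normal(size=(1000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cloud = 0.5 * dirs + np.array([1.0, 2.0, 0.3]) + 0.005 * rng.normal(size=(1000, 3))
    c, r, rms = fit_sphere(cloud)
    print(r, rms < 0.02)               # radius ~0.5, small residual -> "sphere"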

  16. Measuring the Visual Salience of 3D Printed Objects.

    PubMed

    Wang, Xi; Lindlbauer, David; Lessig, Christian; Maertens, Marianne; Alexa, Marc

    2016-01-01

    To investigate human viewing behavior on physical realizations of 3D objects, the authors use an eye tracker with scene camera and fiducial markers on 3D objects to gather fixations on the presented stimuli. They use this data to validate assumptions regarding visual saliency that so far have experimentally only been analyzed for flat stimuli. They provide a way to compare fixation sequences from different subjects and developed a model for generating test sequences of fixations unrelated to the stimuli. Their results suggest that human observers agree in their fixations for the same object under similar viewing conditions. They also developed a simple procedure to validate computational models for visual saliency of 3D objects and found that popular models of mesh saliency based on center surround patterns fail to predict fixations.

  17. A 3-D measurement system using object-oriented FORTH

    SciTech Connect

    Butterfield, K.B.

    1989-01-01

    Discussed is a system for storing 3-D measurements of points that relates the coordinate system of the measurement device to the global coordinate system. The program described here used object-oriented FORTH to store the measured points as sons of the measuring device location. Conversion of local coordinates to absolute coordinates is performed by passing messages to the point objects. Modifications to the object-oriented FORTH system are also described. 1 ref.

  18. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  19. Segmentation of 3D objects using live wire

    NASA Astrophysics Data System (ADS)

    Falcao, Alexandre X.; Udupa, Jayaram K.

    1997-04-01

    We have been developing user-steered image segmentation methods for situations which require considerable user assistance in object definition. In such situations, our segmentation methods aim (1) to provide effective control to the user on the segmentation process while it is being executed and (2) to minimize the total user's time required in the process. In the past, we have presented two paradigms, referred to as live wire and live lane, for segmenting 3D/4D object boundaries in a slice-by-slice fashion. In this paper, we introduce a 3D extension of the live wire approach which can further reduce the time spent by the user in the segmentation process. In 2D live wire, given a slice, for two specified points (pixel vertices) on the boundary of the object, the best boundary segment (as a set of oriented pixel edges) is the minimum-cost path between the two points. This segment is found via dynamic programming in real time as the user anchors the first point and moves the cursor to indicate the second point. A complete 2D boundary in this slice is identified as a set of consecutive boundary segments forming a 'closed,' 'connected,' 'oriented' contour. The strategy of the 3D extension is that, first, users specify contours via live- wiring on a few orthogonal slices. If these slices are selected strategically, then we have a sufficient number of points on the 3D boundary of the object to do live-wiring automatically on all axial slices of the 3D scene. Based on several validation studies involving segmentation of the bones of the foot in MR images, we found that the 3D extension of live wire is statistically significantly (p less than 0.0001) more repeatable and 2 - 6 times faster (p less than 0.01) than the 2D live wire method and 3 - 15 times faster than manual tracing.
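    The core live-wire step described above, the minimum-cost path between two user-specified boundary points, can be illustrated with a plain Dijkstra search on a 4-connected pixel grid, as in the Python sketch below. The cost used here is a simple inverse-gradient term; the paper's oriented pixel-edge costs and the 3D extension are not reproduced.

    # Minimal 2-D live-wire step: minimum-cost path between two seed pixels.
    import heapq
    import numpy as np

    def live_wire_path(cost, start, goal):
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = 0.0
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                    dist[nr, nc] = d + cost[nr, nc]
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

    rng = np.random.default_rng(7)
    image = rng.random((64, 64))
    gy, gx = np.gradient(image)
    cost = 1.0 / (1e-3 + np.hypot(gx, gy))        # cheap along strong edges
    print(len(live_wire_path(cost, (5, 5), (60, 58))))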

  20. 3-d interpolation in object perception: evidence from an objective performance paradigm.

    PubMed

    Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana

    2005-06-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units.

  1. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910(®) scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the 2 breast surfaces. Three observers analyzed the precision of the evaluation protocol using 2 dummy models (n = 60) and 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900), and compared it to established 2-D measurements on 23 breast reconstructive patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88) and may play a part in an objective surgical outcome analysis after incorporation into clinical practice.
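
    For illustration only (not the authors' clinical protocol), the mirrored-overlay comparison can be approximated on point-sampled surfaces by reflecting one breast across an assumed midsagittal plane and averaging nearest-neighbour distances to the other; the coefficient of variation then summarises precision over repeated measurements. The plane choice and names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_contour_difference(left_pts, right_pts):
    """Mirror the left-breast surface points across the x = 0 plane (assumed
    midsagittal plane) and return the mean nearest-neighbour distance to the
    right-breast surface points, in the input units."""
    mirrored = left_pts.copy()
    mirrored[:, 0] *= -1.0
    dists, _ = cKDTree(right_pts).query(mirrored)
    return dists.mean()

def coefficient_of_variation(repeated_measurements):
    """Precision of repeated measurements, expressed in percent."""
    m = np.asarray(repeated_measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```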

  2. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
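
    As a sketch of the DSM/DEM step mentioned above (the thresholds and the connected-component grouping are illustrative assumptions, not the authors' algorithm), above-ground object regions can be labelled as follows:

```python
import numpy as np
from scipy import ndimage

def object_regions(dsm, dem, min_height=2.0, min_cells=25):
    """Label contiguous above-ground regions in a DSM/DEM raster pair.

    dsm, dem: 2D arrays on the same grid (metres).
    min_height: cells closer than this to the bare earth are ignored.
    min_cells: very small regions (noise, cars) are discarded.
    """
    height = dsm - dem                      # normalised height above ground
    mask = height > min_height
    labels, n = ndimage.label(mask)         # group connected object cells
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= min_cells)[0] + 1)
    return np.where(keep, labels, 0), height
```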

  3. Object 3D surface reconstruction approach using portable laser scanner

    NASA Astrophysics Data System (ADS)

    Xu, Ning; Zhang, Wei; Zhu, Liye; Li, Changqing; Wang, Shifeng

    2017-06-01

    Environment perception plays a key role in a robot system, and the 3D surfaces of objects can provide essential information for the robot to recognize them. This paper presents an approach to reconstructing objects' 3D surfaces using a portable laser scanner we designed, which consists of a single-layer laser scanner, an encoder, a motor, a power supply and mechanical components. The captured point cloud data are processed to remove discrete (outlier) points and then denoised, stitched and registered. Triangular mesh generation from the point cloud is then accomplished using Gaussian bilateral filtering, real-time ICP registration and a greedy triangle projection algorithm. The experimental results show the feasibility of the designed device and the proposed algorithm.
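
    A minimal sketch of the discrete-point (outlier) removal step, assuming a statistical k-nearest-neighbour criterion; the subsequent stitching, ICP registration and greedy triangulation stages would normally rely on a dedicated library and are not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_discrete_points(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is far
    above the global average (a common statistical outlier filter)."""
    tree = cKDTree(points)
    # query returns the point itself as the first neighbour, so ask for k + 1
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```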

  4. Augmented Reality vs Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2017-01-25

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5% on average compared to AR (p < 0.024). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3% slower in VR than in AR (p < 0.04). Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  5. Laser embedding electronics on 3D printed objects

    NASA Astrophysics Data System (ADS)

    Kirleis, Matthew A.; Simonson, Duane; Charipar, Nicholas A.; Kim, Heungsoo; Charipar, Kristin M.; Auyeung, Ray C. Y.; Mathews, Scott A.; Piqué, Alberto

    2014-03-01

    Additive manufacturing techniques such as 3D printing are able to generate reproductions of a part in free space without the use of molds; however, the objects produced lack electrical functionality from an applications perspective. At the same time, techniques such as inkjet and laser direct-write (LDW) can be used to print electronic components and connections onto already existing objects, but are not capable of generating a full object on their own. The approach missing to date is the combination of 3D printing processes with direct-write of electronic circuits. Among the numerous direct write techniques available, LDW offers unique advantages and capabilities given its compatibility with a wide range of materials, surface chemistries and surface morphologies. The Naval Research Laboratory (NRL) has developed various LDW processes ranging from the non-phase transformative direct printing of complex suspensions or inks to lase-and-place for embedding entire semiconductor devices. These processes have been demonstrated in digital manufacturing of a wide variety of microelectronic elements ranging from circuit components such as electrical interconnects and passives to antennas, sensors, actuators and power sources. At NRL we are investigating the combination of LDW with 3D printing to demonstrate the digital fabrication of functional parts, such as 3D circuits. Merging these techniques will make possible the development of a new generation of structures capable of detecting, processing, communicating and interacting with their surroundings in ways never imagined before. This paper shows the latest results achieved at NRL in this area, describing the various approaches developed for generating 3D printed electronics with LDW.

  6. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared with retrieval using global image features, features extracted from regions of interest (ROIs) that reflect the distribution patterns of abnormalities would be more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features, including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D Gray Level Co-occurrence Matrix, extracted from 3D ROIs, based on our previous 2D medical image retrieval system. The system was evaluated with 20 CT volume datasets for colon polyp detection. Preliminary experiments indicated that integrating morphological features with texture features could greatly improve retrieval performance. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
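
    For reference, the two geometric features named above are standard functions of the principal curvatures; a small sketch (variable names assumed, and note that sign/range conventions for the shape index vary between papers, with a [0, 1] variant also in common use) is:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink-style shape index in [-1, 1]; k1 >= k2 are the principal
    curvatures. arctan2 handles the umbilic case k1 == k2 gracefully."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Curvedness: the overall magnitude of surface bending."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
```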

  7. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    Report front matter (garbled in the source record). Recoverable details: Artificial Intelligence Laboratory and Center for Biological and Computational Learning, 545 Technology Square, Cambridge; A.I. Memo No. 1409, C.B.C.L. Paper No. 76, December 1992; the research was done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences.

  8. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, which determines the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.
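
    A highly simplified sketch of the keep-in-frame control loop described above; the camera, stage and piezo interfaces, the threshold and the defocus estimator are hypothetical placeholders, not the authors' hardware API.

```python
import numpy as np

def xy_offset(frame, threshold):
    """Centroid of the bright (fluorescent) pixels relative to the frame centre, in pixels."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return 0.0, 0.0          # object lost; leave the stage where it is
    h, w = frame.shape
    return xs.mean() - w / 2.0, ys.mean() - h / 2.0

def tracking_step(camera, stage, piezo, estimate_defocus,
                  threshold=100, gain=0.5, um_per_px=0.1):
    """One closed-loop iteration: re-centre the object with the x, y stage and
    correct focus with the piezo. All device objects are placeholders."""
    frame = camera.grab()
    dx, dy = xy_offset(frame, threshold)
    stage.move_relative(-gain * dx * um_per_px, -gain * dy * um_per_px)
    piezo.move_relative(-gain * estimate_defocus(frame))  # micrometres
```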

  9. The Visual Priming of Motion-Defined 3D Objects.

    PubMed

    Jiang, Xiong; Jiang, Yang; Parasuraman, Raja

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a "cloudy" SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a "cloudy" SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus--but not a static image or a semantic stimulus--that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed.

  10. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  11. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and on different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar to or better than the state of the art on textured and non-textured shape retrieval benchmarks and give interesting insights into the effectiveness of different shape descriptors and graph kernels.

  12. CAD-based 3D object representation for robot vision

    SciTech Connect

    Bhanu, B.; Ho, C.C.

    1987-08-01

    This article explains that most existing vision systems rely on models generated in an ad hoc manner and have no explicit relation to the CAD/CAM system originally used to design and manufacture these objects. The authors desire a more unified system that allows vision models to be automatically generated from an existing CAD database. A CAD system contains an interactive design interface, graphic display utilities, model analysis tools, automatic manufacturing interfaces, etc. Although it is a suitable environment for design purposes, its representations and the models it generates do not contain all the features that are important in robot vision applications. In this article, the authors propose a CAD-based approach for building representations and models that can be used in diverse applications involving 3D object recognition and manipulation. There are two main steps in using this approach. First, they design the object's geometry using a CAD system, or extract its CAD model from the existing database if it has already been modeled. Second, they develop representations from the CAD model and construct features possibly by combining multiple representations that are crucial in 3D object recognition and manipulation.

  13. An inverse method to retrieve 3D radar reflectivity composites

    NASA Astrophysics Data System (ADS)

    Roca-Sancho, Jordi; Berenguer, Marc; Sempere-Torres, Daniel

    2014-11-01

    Dense radar networks offer the possibility of getting better Quantitative Precipitation Estimates (QPE) than those obtained with individual radars, as they allow increasing the coverage and improving quality of rainfall estimates in overlapping areas. Well-known sources of error such as attenuation by intense rainfall or errors associated with range can be mitigated through radar composites. Many compositing techniques are devoted to operational uses and do not exploit all the information that the network is providing. In this work an inverse method to obtain high-resolution radar reflectivity composites is presented. The method uses a model of radar sampling of the atmosphere that accounts for path attenuation and radar measurement geometry. Two significantly different rainfall situations are used to show detailed results of the proposed inverse method in comparison to other existing methodologies. A quantitative evaluation is carried out in a 12 h-event using two independent sources of information: a radar not involved in the composition process and a raingauge network. The proposed inverse method shows better performance in retrieving high reflectivity values and reproducing variability at convective scales than existing methods.

  14. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when the two local regularities L-MSDA and L-MSDSM are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
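
    The combined simplicity measure suggested above amounts to a weighted sum of two normalised regularities; a small sketch using the reported 90%/10% weighting (helper names and the normalisation are assumptions) is:

```python
import numpy as np

def l_msda(angles):
    """Localized standard deviation of angles within a reconstructed face group."""
    return np.std(angles)

def l_msdsm(segment_lengths):
    """Localized standard deviation of segment magnitudes."""
    return np.std(segment_lengths)

def combined_regularity(angles, segment_lengths, w_msda=0.9, w_msdsm=0.1):
    """Weighted simplicity score to be minimised; lower means more regular.
    Each term is divided by its mean so the two scales are comparable."""
    a = l_msda(angles) / (np.mean(angles) + 1e-12)
    s = l_msdsm(segment_lengths) / (np.mean(segment_lengths) + 1e-12)
    return w_msda * a + w_msdsm * s
```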

  15. Divided attention limits perception of 3-D object shapes.

    PubMed

    Scharff, Alec; Palmer, John; Moore, Cathleen M

    2013-02-12

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes.

  16. Fully automatic 3D digitization of unknown objects

    NASA Astrophysics Data System (ADS)

    Rozenwald, Gabriel F.; Seulin, Ralph; Fougerolle, Yohan D.

    2010-01-01

    This paper presents a complete system for 3D digitization of objects, assuming no prior knowledge of their shape. The proposed methodology is applied to a digitization cell composed of a fringe projection scanner head, a robotic arm with 6 degrees of freedom (DoF), and a turntable. A two-step approach is used to automatically guide the scanning process. The first step uses the concept of Mass Vector Chains (MVC) to perform an initial scanning. The second step directs the scanner to the remaining holes of the model. Post-processing of the data is also addressed. Tests with real objects were performed, and results for digitization time and number of views are provided along with the estimated surface coverage.
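
    A minimal sketch of the Mass Vector Chain idea behind the first scanning step: each acquired view contributes an area-weighted sum of its triangle normals, and since these sums cancel over a closed surface, the residual of the views acquired so far points away from the still-unscanned side. Names are assumptions; the hole-driven second step is not shown.

```python
import numpy as np

def mass_vector(vertices, faces):
    """Area-weighted sum of triangle normals of one scanned mesh patch.

    vertices: (n, 3) array; faces: (m, 3) integer array of vertex indices."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # The cross product has the face-normal direction and twice the triangle area.
    n = np.cross(v1 - v0, v2 - v0)
    return 0.5 * n.sum(axis=0)

def next_view_direction(scanned_patches):
    """Unit vector from the object toward a suggested next scanner position.

    For a closed surface the mass vectors sum to zero, so the unscanned region
    faces roughly opposite the accumulated vector of the patches seen so far."""
    total = sum(mass_vector(v, f) for v, f in scanned_patches)
    norm = np.linalg.norm(total)
    return -total / norm if norm > 0 else None
```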

  17. Image segmentation to inspect 3-D object sizes

    NASA Astrophysics Data System (ADS)

    Hsu, Jui-Pin; Fuh, Chiou-Shann

    1996-01-01

    Object size inspection is an important task and has various applications in computer vision, for example, the automatic control of stone-breaking machines, which perform better if the sizes of the stones to be broken can be predicted. An algorithm is proposed for image segmentation in size inspection of almost round stones with high or low texture. Although our experiments focus on stones, the algorithm can be applied to other 3-D objects. We use one fixed camera and four light sources at four different positions, switched on one at a time, to take four images. Then we compute the image differences and binarize them to extract edges. We explain, step by step, the photographing, the edge extraction, the noise removal, and the edge gap filling. Experimental results are presented.
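
    A minimal sketch of the image-difference step described above (the threshold and the way the four pairwise differences are combined are assumptions):

```python
import numpy as np

def shadow_edges(images, threshold=30):
    """Extract edge/shadow pixels from four images of a static scene taken with
    a fixed camera under four different light source positions.

    images: list of four 2D arrays (uint8 or float).
    Returns a boolean edge map: pixels whose brightness changes strongly
    between at least one pair of lighting conditions."""
    edges = np.zeros(images[0].shape, dtype=bool)
    for i in range(len(images)):
        for j in range(i + 1, len(images)):
            diff = np.abs(images[i].astype(float) - images[j].astype(float))
            edges |= diff > threshold   # binarise the difference image
    return edges
```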

  18. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
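
    A compact sketch of the translation search only (the orientation loop is omitted): image pixels contribute unit complex numbers encoding their directional-derivative phase, the projected model edges contribute unit complex numbers encoding their normal phase, and one FFT-based correlation evaluates the phase similarity over all model positions. The gradient operator and masking details are assumptions.

```python
import numpy as np

def phase_match_surface(image, edge_mask, edge_angle):
    """Correlation of gradient-phase vectors over all translations.

    image:      2D grayscale array.
    edge_mask:  boolean array (same shape) marking projected model-edge pixels.
    edge_angle: phase angle (radians) of the edge normal at those pixels.
    Returns a real similarity surface; peaks indicate candidate model positions."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    img_phase = np.where(mag > 0, np.exp(1j * np.arctan2(gy, gx)), 0)
    tpl_phase = np.where(edge_mask, np.exp(1j * edge_angle), 0)

    # FFT cross-correlation; the real part rewards aligned phase angles
    # (each overlapping pixel contributes cos(angle difference)).
    corr = np.fft.ifft2(np.fft.fft2(img_phase) * np.conj(np.fft.fft2(tpl_phase)))
    return np.real(corr)
```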

  19. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
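
    For context (not taken from the paper), in a linear chirp scheme the intermediate-frequency (beat) value at each pixel maps to range roughly as follows; the symbols are the usual chirp parameters and the example numbers are illustrative only.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(f_beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Target range for a linear chirp: R = c * f_beat * T / (2 * B)."""
    return C * f_beat_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

# Example: a 10 GHz-wide, 1 ms chirp with a ~670 kHz beat maps to roughly 10 m.
# range_from_beat(670e3, 10e9, 1e-3)  ->  ~10.0
```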

  20. Object-Oriented Approach for 3d Archaeological Documentation

    NASA Astrophysics Data System (ADS)

    Valente, R.; Brumana, R.; Oreni, D.; Banfi, F.; Barazzetti, L.; Previtali, M.

    2017-08-01

    Documentation on archaeological fieldworks needs to be accurate and time-effective. Many features unveiled during excavations can be recorded just once, since the archaeological workflow physically removes most of the stratigraphic elements. Some of them have peculiar characteristics which make them hardly recognizable as objects and prevent a full 3D documentation. The paper presents a suitable feature-based method to carry on archaeological documentation with a three-dimensional approach, tested on the archaeological site of S. Calocero in Albenga (Italy). The method is based on one hand on the use of structure from motion techniques for on-site recording and 3D Modelling to represent the three-dimensional complexity of stratigraphy. The entire documentation workflow is carried out through digital tools, assuring better accuracy and interoperability. Outputs can be used in GIS to perform spatial analysis; moreover, a more effective dissemination of fieldworks results can be assured with the spreading of datasets and other information through web-services.

  1. Additive manufacturing. Continuous liquid interface production of 3D objects.

    PubMed

    Tumbleston, John R; Shirvanyants, David; Ermoshkin, Nikita; Janusziewicz, Rima; Johnson, Ashley R; Kelly, David; Chen, Kai; Pinschmidt, Robert; Rolland, Jason P; Ermoshkin, Alexander; Samulski, Edward T; DeSimone, Joseph M

    2015-03-20

    Additive manufacturing processes such as 3D printing use time-consuming, stepwise layer-by-layer approaches to object fabrication. We demonstrate the continuous generation of monolithic polymeric parts up to tens of centimeters in size with feature resolution below 100 micrometers. Continuous liquid interface production is achieved with an oxygen-permeable window below the ultraviolet image projection plane, which creates a "dead zone" (persistent liquid interface) where photopolymerization is inhibited between the window and the polymerizing part. We delineate critical control parameters and show that complex solid parts can be drawn out of the resin at rates of hundreds of millimeters per hour. These print speeds allow parts to be produced in minutes instead of hours.

  2. Optical 3D sensor for large objects in industrial application

    NASA Astrophysics Data System (ADS)

    Kuhmstedt, Peter; Heinze, Matthias; Himmelreich, Michael; Brauer-Burchardt, Christian; Brakhage, Peter; Notni, Gunther

    2005-06-01

    A new self-calibrating optical 3D measurement system using the fringe projection technique, named "kolibri 1500", is presented. It can be utilised to acquire the all-around shape of large objects. The basic measuring principle is the phasogrammetric approach introduced by the authors /1, 2/. The "kolibri 1500" consists of a stationary system with a translation unit for handling objects. Automatic whole-body measurement is achieved by using sensor head rotation and a changeable object position, which can be done under full computer control. Multi-view measurement is realised by using the concept of virtual reference points. In this way no matching procedures or markers are necessary for the registration of the different images. This makes the system very flexible for realising different measurement tasks. Furthermore, due to the self-calibrating principle, mechanical alterations are compensated. Typical parameters of the system are: the measurement volume extends from 400 mm up to 1500 mm maximum length, the measurement time is between 2 min for 12 images and 20 min for 36 images, and the measurement accuracy is below 50 μm. The flexibility makes the measurement system useful for a wide range of applications such as quality control, rapid prototyping, design and CAD/CAM, which will be shown in the paper.

  3. Investigating the Bag-of-Words Method for 3D Shape Retrieval

    NASA Astrophysics Data System (ADS)

    Li, Xiaolan; Godil, Afzal

    2010-12-01

    This paper investigates the capabilities of the Bag-of-Words (BW) method in the 3D shape retrieval field. The contributions of this paper are: (1) the 3D shape retrieval task is categorized from different points of view: specific versus generic, partial-to-global retrieval (PGR) versus global-to-global retrieval (GGR), and articulated versus nonarticulated; (2) the spatial information, represented as concentric spheres, is integrated into the framework to improve the discriminative ability; (3) the analysis of the experimental results on the Purdue Engineering Benchmark (PEB) reveals that some properties of the BW approach make it perform better on the PGR task than on the GGR task; (4) the BW approach is evaluated on the nonarticulated database PEB and the articulated database McGill Shape Benchmark (MSB) and compared to other methods.
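
    A minimal sketch of the spatially extended Bag-of-Words encoding described above: local descriptors are quantised against a codebook learned offline (e.g. by k-means), and separate histograms are kept per concentric spherical shell around the shape centre before being concatenated. Names and the shell binning are illustrative assumptions.

```python
import numpy as np

def bow_with_shells(descriptors, positions, codebook, n_shells=3):
    """Concatenated per-shell Bag-of-Words histogram for one 3D shape.

    descriptors: (n, d) local shape descriptors.
    positions:   (n, 3) sample positions on the centred, scale-normalised shape.
    codebook:    (k, d) visual words learned offline."""
    # Hard-assign each descriptor to its nearest visual word.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)

    # Bin samples into concentric spherical shells by distance from the centroid.
    r = np.linalg.norm(positions - positions.mean(axis=0), axis=1)
    shells = np.minimum((r / (r.max() + 1e-12) * n_shells).astype(int), n_shells - 1)

    k = codebook.shape[0]
    hist = np.zeros((n_shells, k))
    for s, w in zip(shells, words):
        hist[s, w] += 1
    hist /= hist.sum() + 1e-12           # normalise for retrieval comparisons
    return hist.ravel()
```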

  4. Robust feature detection for 3D object recognition and matching

    NASA Astrophysics Data System (ADS)

    Pankanti, Sharath; Dorai, Chitra; Jain, Anil K.

    1993-06-01

    Salient surface features play a central role in tasks related to 3-D object recognition and matching. There is a large body of psychophysical evidence demonstrating the perceptual significance of surface features such as local minima of principal curvatures in the decomposition of objects into a hierarchy of parts. Many recognition strategies employed in machine vision also directly use features derived from surface properties for matching. Hence, it is important to develop techniques that detect surface features reliably. Our proposed scheme consists of (1) a preprocessing stage, (2) a feature detection stage, and (3) a feature integration stage. The preprocessing step selectively smoothes out noise in the depth data without degrading salient surface details and permits reliable local estimation of the surface features. The feature detection stage detects both edge-based and region-based features, of which many are derived from curvature estimates. The third stage is responsible for integrating the information provided by the individual feature detectors. This stage also completes the partial boundaries provided by the individual feature detectors, using proximity and continuity principles of Gestalt. All our algorithms use local support and, therefore, are inherently parallelizable. We demonstrate the efficacy and robustness of our approach by applying it to two diverse domains of applications: (1) segmentation of objects into volumetric primitives and (2) detection of salient contours on free-form surfaces. We have tested our algorithms on a number of real range images with varying degrees of noise and missing data due to self-occlusion. The preliminary results are very encouraging.

  5. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  6. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  7. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  8. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  9. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly by using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometry of the building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
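
    A minimal sketch of the first encoding step, rasterising an airborne LiDAR building point cloud into a top-view depth image (highest return per ground cell); the grid resolution and empty-cell handling are assumptions.

```python
import numpy as np

def top_view_depth_image(points, cell_size=0.5):
    """Rasterise an (n, 3) point cloud into a top-view depth image.

    Each pixel stores the maximum z of the points falling into that ground cell,
    so the building roof dominates; empty cells default to the minimum height."""
    xy_min = points[:, :2].min(axis=0)
    cols, rows = ((points[:, :2] - xy_min) / cell_size).astype(int).T
    h, w = rows.max() + 1, cols.max() + 1
    depth = np.full((h, w), points[:, 2].min())
    # np.maximum.at correctly handles several points landing in the same cell.
    np.maximum.at(depth, (rows, cols), points[:, 2])
    return depth
```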

  10. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  11. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D-object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  12. Phase retrieval and 3D imaging in gold nanoparticles based fluorescence microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev

    2017-02-01

    Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires the acquisition of many images per volume, is therefore time consuming, and may not be suitable for live cell 3D imaging. We propose the use of a modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we propose to apply phase retrieval to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a three-dimensional (3D) cellular environment, based on image processing algorithms that can significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
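
    For orientation, a textbook two-plane Gerchberg-Saxton iteration (not the authors' modified version) alternates between the object and Fourier planes, enforcing the measured amplitude in each while keeping the evolving phase:

```python
import numpy as np

def gerchberg_saxton(obj_amplitude, fourier_amplitude, n_iter=100):
    """Recover a phase consistent with amplitude measurements in two planes.

    obj_amplitude:     measured amplitude in the object/image plane (2D array).
    fourier_amplitude: measured amplitude in the Fourier (far-field) plane.
    Returns the complex object-plane field estimate."""
    field = obj_amplitude.astype(complex)          # start with zero phase
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = fourier_amplitude * np.exp(1j * np.angle(F))   # keep phase, fix amplitude
        field = np.fft.ifft2(F)
        field = obj_amplitude * np.exp(1j * np.angle(field))
    return field
```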

  13. 3D genome structure modeling by Lorentzian objective function.

    PubMed

    Trieu, Tuan; Cheng, Jianlin

    2017-02-17

    The 3D structure of the genome plays a vital role in biological processes such as gene interaction, gene regulation, DNA replication and genome methylation. Advanced chromosomal conformation capture techniques, such as Hi-C and tethered conformation capture, can generate chromosomal contact data that can be used to computationally reconstruct 3D structures of the genome. We developed a novel restraint-based method that is capable of reconstructing 3D genome structures utilizing both intra- and inter-chromosomal contact data. Our method was robust to noise and performed well in comparison with a panel of existing methods on a controlled simulated data set. On a real Hi-C data set of the human genome, our method produced chromosome and genome structures that are consistent with 3D FISH data and known knowledge about the human chromosome and genome, such as chromosome territories and the cluster of small chromosomes in the nucleus center, with the exception of chromosome 18. The tool and experimental data are available at https://missouri.box.com/v/LorDG.

  14. 3D genome structure modeling by Lorentzian objective function.

    PubMed

    Trieu, Tuan; Cheng, Jianlin

    2016-11-29

    The 3D structure of the genome plays a vital role in biological processes such as gene interaction, gene regulation, DNA replication and genome methylation. Advanced chromosomal conformation capture techniques, such as Hi-C and tethered conformation capture, can generate chromosomal contact data that can be used to computationally reconstruct 3D structures of the genome. We developed a novel restraint-based method that is capable of reconstructing 3D genome structures utilizing both intra- and inter-chromosomal contact data. Our method was robust to noise and performed well in comparison with a panel of existing methods on a controlled simulated data set. On a real Hi-C data set of the human genome, our method produced chromosome and genome structures that are consistent with 3D FISH data and known knowledge about the human chromosome and genome, such as chromosome territories and the cluster of small chromosomes in the nucleus center, with the exception of chromosome 18. The tool and experimental data are available at https://missouri.box.com/v/LorDG.
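
    A heavily simplified sketch of a Lorentzian-style restraint objective of the kind named in the title; the functional form, scale parameter and the conversion from contact frequencies to target distances are assumptions here, not the published LorDG formulation.

```python
import numpy as np

def lorentzian_score(coords, restraints, scale=1.0):
    """Sum of Lorentzian terms rewarding structures whose pairwise distances
    match target distances derived from chromosomal contact data.

    coords:     (n, 3) positions of chromatin fragments.
    restraints: iterable of (i, j, target_distance) tuples.
    Each term equals 1 at a perfect match and decays smoothly (rather than
    quadratically) for violations, making the objective robust to noisy contacts."""
    score = 0.0
    for i, j, target in restraints:
        d = np.linalg.norm(coords[i] - coords[j])
        score += 1.0 / (1.0 + ((d - target) / scale) ** 2)
    return score   # to be maximised, e.g. by gradient ascent over coords
```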

  15. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  16. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
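
    A minimal sketch of the height-histogram idea used in the ground segmentation step (the bin width and the choice of the dominant ground bin are assumptions; the Gibbs-Markov random field refinement is not shown):

```python
import numpy as np

def ground_height_range(z_values, bin_width=0.1, spread=0.3):
    """Estimate the ground height interval from the z histogram of a 3D point set.

    The ground usually produces the most heavily populated low bin, so take the
    fullest bin in the lower half of the histogram and pad it by `spread`."""
    bins = np.arange(z_values.min(), z_values.max() + bin_width, bin_width)
    counts, edges = np.histogram(z_values, bins=bins)
    low = counts[: max(1, len(counts) // 2)]        # ignore tall structures
    g = int(low.argmax())
    return edges[g] - spread, edges[g + 1] + spread

def ground_mask(points):
    """Boolean mask of ground points, given points as an (n, 3) array."""
    lo, hi = ground_height_range(points[:, 2])
    return (points[:, 2] >= lo) & (points[:, 2] <= hi)
```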

  17. Influence of 3D Radiative Effects on Satellite Retrievals of Cloud Properties

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas; Marshak, Alexander; Einaudi, Franco (Technical Monitor)

    2001-01-01

    When cloud properties are retrieved from satellite observations, the calculations apply 1D theory to the 3D world: they only consider vertical structures and ignore horizontal cloud variability. This presentation discusses how big the resulting errors can be in the operational retrievals of cloud optical thickness. A new technique was developed to estimate the magnitude of potential errors by analyzing the spatial patterns of visible and infrared images. The proposed technique was used to set error bars for optical depths retrieved from new MODIS measurements. Initial results indicate that the 1 km resolution retrievals are subject to abundant uncertainties. Averaging over 50 by 50 km areas reduces the errors, but does not remove them completely; even in the relatively simple case of high sun (30 degree zenith angle), about a fifth of the examined areas had biases larger than ten percent. As expected, errors increase substantially for more oblique illumination.

  18. Retrieval of cloud microphysical parameters from INSAT-3D: a feasibility study using radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Jinya, John; Bipasha, Paul S.

    2016-05-01

    Clouds strongly modulate the Earth's energy balance and its atmosphere through their interaction with solar and terrestrial radiation. They interact with radiation in various ways, such as scattering, emission and absorption. By observing these changes in radiation at different wavelengths, cloud properties can be estimated. Cloud properties are of utmost importance in studying different weather and climate phenomena. At present, no satellite provides cloud microphysical parameters over the Indian region with high temporal resolution. INSAT-3D imager observations in 6 spectral channels from a geostationary platform offer the opportunity to study continuous cloud properties over the Indian region. Visible (0.65 μm) and shortwave-infrared (1.67 μm) channel radiances can be used to retrieve cloud microphysical parameters such as cloud optical thickness (COT) and cloud effective radius (CER). In this paper, we have carried out a feasibility study with the objective of cloud microphysics retrieval. For this, an inter-comparison of 15 globally available radiative transfer models (RTMs) was carried out with the aim of generating a Look-up Table (LUT). The SBDART model was chosen for the simulations. The sensitivity of each spectral channel to different cloud properties was investigated. The inputs to the RT model were configured over our study region (50°S - 50°N and 20°E - 130°E) and a large number of simulations were carried out using random input vectors to generate the LUT. The determination of cloud optical thickness and cloud effective radius from spectral reflectance measurements constitutes the inverse problem and is typically solved by comparing the measured reflectances with entries in the LUT and searching for the combination of COT and CER that gives the best fit. The products are available on the website www.mosdac.gov.in
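
    A minimal sketch of the LUT inversion step described above (the channel weighting and the LUT layout are assumptions):

```python
import numpy as np

def retrieve_cot_cer(r_vis, r_swir, lut):
    """Find the (COT, CER) pair whose simulated reflectances best fit a measurement.

    r_vis, r_swir: measured 0.65 um and 1.67 um reflectances for one pixel.
    lut: structured array (or dict of equal-length arrays) with fields
         'cot', 'cer', 'r_vis', 'r_swir' produced offline by the RT simulations."""
    err = (lut['r_vis'] - r_vis) ** 2 + (lut['r_swir'] - r_swir) ** 2
    best = int(np.argmin(err))
    return lut['cot'][best], lut['cer'][best]
```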

  19. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for 3D models represented by the Boundary Representation (B-Rep) model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.

  20. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and quality of reconstruction have been studied by varying influencing parameters such as the concentration of dichromate and electron donor, and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region without any post thermal or chemical processing.

  1. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. It enables rapid construction of the logical and semantic expression of the city 3D model, solves the problem of representing city 3D spatial information in which the same location carries multiple properties and the same property spans multiple locations, designs the spatial object structure of point, line, polygon and body for a city 3D spatial database, and provides a new idea and method for city 3D GIS modeling and organization management.
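
    A toy sketch of the point/line/polygon/body object structure suggested above (class and field names are illustrative, not the paper's schema); a shared base class carrying an open-ended property set is one way to let one location hold several properties and one property span several locations.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SpatialObject:
    """Base class: an identifier plus an open-ended set of thematic properties."""
    oid: str
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Point(SpatialObject):
    xyz: Tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class Line(SpatialObject):
    points: List[Point] = field(default_factory=list)

@dataclass
class Polygon(SpatialObject):
    rings: List[List[Point]] = field(default_factory=list)

@dataclass
class Body(SpatialObject):
    """A 3D solid assembled from boundary polygons (e.g. a building)."""
    faces: List[Polygon] = field(default_factory=list)
```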

  2. Affordance-based 3D feature for generic object recognition

    NASA Astrophysics Data System (ADS)

    Iizuka, M.; Akizuki, S.; Hashimoto, M.

    2017-03-01

    Techniques for generic object recognition, which targets everyday objects such as cups and spoons, and techniques for approach vector estimation (e.g. estimating grasp position), which are needed for carrying out tasks involving everyday objects, are considered necessary for the perceptual system of service robots. In this research, we design features for generic object recognition so that they can also be applied to approach vector estimation. To carry out tasks involving everyday objects, estimating the function of the target object is critical. Moreover, just as the function of holding liquid is found in all cups, a function is shared within each type (class) of everyday objects. We thus propose a generic object recognition method that can estimate the approach vector by expressing an object's function as a feature. In a test of generic object recognition of everyday objects, we confirmed that our proposed method had a 92% recognition rate, 11% higher than the mainstream generic object recognition technique using a convolutional neural network (CNN).

  3. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and growing Internet usage requires object recognition for many applications, particularly for occluded objects. However, occlusion is still an unhandled issue that entangles the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users work with source images despite the problems occlusion introduces. The aim of this research is to review algorithms for recognizing occluded objects and to weigh their pros and cons for solving the occlusion problem, focusing on the features extracted from an occluded object to distinguish it from other co-existing objects and on new techniques that can differentiate the occluded fragments and sections inside an image.

  4. Frio, Yegua objectives of E. Texas 3D seismic

    SciTech Connect

    1996-07-01

    Houston companies plan to explore deeper formations along the Sabine River on the Texas and Louisiana Gulf Coast. PetroGuard Co. Inc. and Jebco Seismic Inc., Houston, jointly secured a seismic and leasing option from Hankamer family et al. on about 120 sq miles in Newton County, Tex., and Calcasieu Parish, La. PetroGuard, which specializes in oilfield rehabilitation, has production experience in the area. Historic production in the area spans three major geologic trends: Oligocene Frio/Hackberry, downdip and mid-dip Eocene Yegua, and Eocene Wilcox. In the southern part of the area, to be explored first, the trends lie at 9,000-10,000 ft, 10,000-12,000 ft, and 14,000-15,000 ft, respectively. Output Exploration Co., an affiliate of Input/Output Inc., Houston, acquired from PetroGuard and Jebco all exploratory drilling rights in the option area. Output will conduct 3D seismic operations over nearly half the acreage this summer. Data acquisition started late this spring. Output plans to use a combination of a traditional land recording system and I/O's new RSR 24-bit radio telemetry system because the area spans environments from dry land to swamp.

  5. Prediction models from CAD models of 3D objects

    NASA Astrophysics Data System (ADS)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic, prediction-based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of image features can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probabilities of failing to find a pose and of finding an inaccurate pose are minimized.

  6. 3-D Object Pose Determination Using Complex EGI

    DTIC Science & Technology

    1990-10-01

    [Fragmentary DTIC report excerpt: the normal-direction space is discretized into 240 cells using a tessellated pentakis dodecahedron (Fig. 4.1), and a composite object (Fig. 4.2) is used for testing the CEGI-based pose determination.]

  7. Speckle size of light scattered from 3D rough objects.

    PubMed

    Zhang, Geng; Wu, Zhensen; Li, Yanhui

    2012-02-13

    Starting from the scalar Helmholtz integral relation and a coordinate-system transformation, this paper first derives the far-zone speckle field in the observation plane perpendicular to the scattering direction for an arbitrarily shaped conducting rough object illuminated by a plane wave, and then derives the spatial correlation function of the speckle intensity to obtain the speckle size. In particular, specific expressions for the speckle size of light backscattered from spheres, cylinders and cones are obtained in detail; they show that the speckle size along one direction in the observation plane is proportional to the incident wavelength and to the distance between the object and the observation plane, and inversely proportional to the maximal illuminated dimension of the object parallel to that direction. In addition, rough objects of different shapes produce speckles of different shapes. The investigation of the speckle size in this paper will be useful for studying the statistical properties of speckle from complicated rough objects and for applying speckle imaging to target detection and identification.
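
    The stated proportionality can be illustrated with a one-line order-of-magnitude estimate; the shape-dependent prefactors, which the paper derives separately for spheres, cylinders and cones, are omitted here:

```python
def speckle_size(wavelength_m: float, distance_m: float, illuminated_dim_m: float) -> float:
    """Speckle size along one direction of the observation plane: proportional to
    the wavelength and the object-to-plane distance, inversely proportional to the
    maximal illuminated dimension parallel to that direction. The shape-dependent
    prefactor is assumed to be ~1 for this rough estimate."""
    return wavelength_m * distance_m / illuminated_dim_m

# Example: 633 nm light, observation plane 10 m away, 5 cm illuminated dimension
print(speckle_size(633e-9, 10.0, 0.05))  # ~1.3e-4 m, i.e. about 0.13 mm
```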

  8. MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison

    NASA Astrophysics Data System (ADS)

    Corradini, S.; Merucci, L.; Folch, A.

    2010-12-01

    Satellite retrievals and transport models represent the key tools for monitoring the evolution of volcanic clouds. Because of the harmful effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is key for aviation safety. Alongside safety, the economic consequences of airport disruption must also be taken into account: the airport closures due to the recent Icelandic Eyjafjöll eruption caused millions of passengers to be stranded not only in Europe but across the world, and IATA (the International Air Transport Association) estimates that the worldwide airline industry lost a total of about 2.5 billion euros during the disruption. Both safety and economic issues require reliable and robust ash cloud retrievals and trajectory forecasting, and intercomparison between remote sensing and modeling is required to assure precise and reliable volcanic ash products. In this work we perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and Aerosol Optical Depth (AOD) and the FALL3D ash dispersal model. MODIS, aboard the NASA-Terra and NASA-Aqua polar satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and a spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the Brightness Temperature Difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images collected on October 28, 29 and 30 during the 2002 Mt. Etna eruption have been considered as test cases. The results show a general good agreement between the retrieved and the modeled volcanic clouds in the first 300 km from the vents. Even if the
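
    As a hedged sketch of the detection step only (the thresholds and the inversion from brightness temperature difference to ash mass and AOD via MODTRAN look-up tables are not given in the abstract), the split-window test on the 11 and 12 micron channels can be written as:

```python
import numpy as np

def ash_flag(bt_11um: np.ndarray, bt_12um: np.ndarray, threshold_k: float = 0.0) -> np.ndarray:
    """Split-window Brightness Temperature Difference (BTD) test.
    Silicate ash tends to drive BT(11um) - BT(12um) negative, the opposite of
    meteorological water/ice cloud, so pixels below the threshold are flagged.
    The threshold value and the subsequent inversion to ash mass and AOD are
    not reproduced here."""
    btd = bt_11um - bt_12um
    return btd < threshold_k

# Toy example: first pixel is ash-like (negative BTD), second is cloud-like.
print(ash_flag(np.array([265.0, 250.0]), np.array([267.0, 248.0])))  # [ True False]
```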

  9. Towards a 3-D tomographic retrieval for the air-borne limb-imager GLORIA

    NASA Astrophysics Data System (ADS)

    Ungermann, J.; Kaufmann, M.; Hoffmann, L.; Preusse, P.; Oelhaf, H.; Friedl-Vallon, F.; Riese, M.

    2010-11-01

    GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument essentially combining a Fourier transform infrared spectrometer with a two-dimensional (2-D) detector array in combination with a highly flexible gimbal mount. It will be housed in the belly pod of the German research aircraft HALO (High Altitude and Long Range Research Aircraft). It is unique in its combination of high spatial and state-of-the-art spectral resolution. Furthermore, the horizontal view angle with respect to the aircraft flight direction can be varied from 45° to 135°. This allows for tomographic measurements of mesoscale events for a wide variety of atmospheric constituents. In this paper, a tomographic retrieval scheme is presented, which is able to fully exploit the manifold radiance observations of the GLORIA limb sounder. The algorithm is optimized for massive 3-D retrievals of several hundred thousand measurements and atmospheric constituents on common hardware. The new scheme is used to explore the capabilities of GLORIA to sound the atmosphere in full 3-D with respect to the choice of the flight path and to different measurement modes of the instrument using ozone as a test species. It is demonstrated that the achievable resolution should approach 200 m vertically and 20 km-30 km horizontally. Finally, a comparison of the 3-D inversion with conventional 1-D inversions using the assumption of a horizontally homogeneous atmosphere is performed.
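
    The abstract does not give the retrieval equations; a generic stand-in for this kind of large linear(ized) tomographic inversion is a Tikhonov-regularized least-squares step, sketched below (the actual GLORIA scheme is iterative and operates at far larger scale):

```python
import numpy as np

def regularized_retrieval(K: np.ndarray, y: np.ndarray, x_a: np.ndarray, alpha: float) -> np.ndarray:
    """Generic Tikhonov-regularized linear retrieval: minimize
    ||K x - y||^2 + alpha * ||x - x_a||^2, where K is a (linearized) forward
    model mapping the discretized 3-D atmospheric state x to radiances y and
    x_a is an a-priori state. A stand-in for the paper's large iterative scheme."""
    A = K.T @ K + alpha * np.eye(K.shape[1])
    b = K.T @ y + alpha * x_a
    return np.linalg.solve(A, b)

# Tiny example: 3 measurements, 2 unknowns.
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.1])
print(regularized_retrieval(K, y, x_a=np.zeros(2), alpha=0.1))
```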

  10. Towards a 3-D tomographic retrieval for the Air-borne Limb-imager GLORIA

    NASA Astrophysics Data System (ADS)

    Ungermann, J.; Kaufmann, M.; Hoffmann, L.; Preusse, P.; Oelhaf, H.; Friedl-Vallon, F.; Riese, M.

    2010-07-01

    GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument essentially combining a Fourier transform infrared spectrometer with two two-dimensional (2-D) detector arrays in combination with a highly flexible gimbal mount. It will be housed in the belly pod of the German research aircraft HALO (High Altitude and Long Range Research Aircraft). It is unique in its high spatial and spectral resolution. Furthermore, the horizontal view angle with respect to the aircraft can be varied from 45° to 135°. This allows for tomographic measurements of mesoscale events for a wide variety of atmospheric constituents. In this paper, a fast tomographic retrieval scheme is presented, which is able to fully exploit the high-resolution radiance observations of the GLORIA limb sounder. The algorithm is optimized for massive 3-D retrievals of several hundred thousand measurements and atmospheric constituents on common hardware. The new scheme is used to explore the capabilities of GLORIA to sound the atmosphere in full 3-D with respect to the choice of the flight path and to different measurement modes of the instrument using ozone as a test species. It is demonstrated that the achievable resolution should approach 200 m vertically and 20 km-30 km horizontally. Finally, a comparison of the 3-D inversion with conventional 1-D inversions using the assumption of a horizontally homogeneous atmosphere is performed.

  11. BlastNeuron for Automated Comparison, Retrieval and Clustering of 3D Neuron Morphologies.

    PubMed

    Wan, Yinan; Long, Fuhui; Qu, Lei; Xiao, Hang; Hawrylycz, Michael; Myers, Eugene W; Peng, Hanchuan

    2015-10-01

    Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics from neuronal morphology such as size and number of branches, which are not fully suitable for detecting local similarities and differences in the detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, level of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 newly and manually reconstructed neurons, and (3) neuron reconstructions produced using several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify the local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.

  12. Steps toward a 3-D Tomographic Retrieval for the Air-borne Limb-imager GLORIA

    NASA Astrophysics Data System (ADS)

    Ungermann, Joern; Kaufmann, Martin; Hoffmann, Lars; Preusse, Peter; Riese, Martin

    GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument using an infrared limb-imager with a 2-D detector array in combination with a highly flexible mounting unit. It will be housed in the belly pod of the German research airplane HALO (High Altitude and Long Range Research Aircraft). It is unique in its high spatial and spectral resolution and its ability to scan the line of sight 90 degrees horizontally. This allows for tomographic measurements of mesoscale events for a wide variety of atmospheric constituents. In this paper, a fast tomographic retrieval scheme is presented, which is able to fully exploit the high-resolution radiance observations of the GLORIA instrument. The algorithm is optimized for massive 3-D retrievals of several hundred thousand measurements and atmospheric constituents on common hardware. The new scheme is used to explore the capabilities of GLORIA to sound the atmosphere in full 3-D with respect to the choice of the flight path and to different measurement modes of the instrument. Finally, a comparison of the 3-D inversion with conventional 1-D inversions using the assumption of a horizontally homogeneous atmosphere is performed.

  13. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web 3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from the thousands of images floating around the web. This paper provides an update on our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites, using these images to build accurate 3D models of archaeological monuments, enriching multimedia of cultural and archaeological interest with metadata, and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from the Flickr and Picasa repositories, as well as strategies for filtering the results on two levels: (a) based on their built-in metadata, including geo-location information, and (b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models from the large collection of 2D input images (>1000) retrieved from Internet repositories.
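
    A minimal sketch of the first, metadata-based filtering level (the field names and the 1 km radius below are assumptions, not values from the paper): keep only photos whose geotag lies close to the monument's coordinates.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_filter(photos, site_lat, site_lon, radius_km=1.0):
    """Keep photos whose repository geotag lies within radius_km of the monument.
    `photos` is a list of dicts with hypothetical 'lat'/'lon' keys."""
    return [p for p in photos if "lat" in p and
            haversine_km(p["lat"], p["lon"], site_lat, site_lon) <= radius_km]

photos = [{"id": "a", "lat": 37.9715, "lon": 23.7267},  # near the query coordinates
          {"id": "b", "lat": 48.8584, "lon": 2.2945},   # far away
          {"id": "c"}]                                   # no geotag at all
print(geo_filter(photos, site_lat=37.9715, site_lon=23.7257))
```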

  14. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed Tomography (CT) scanning is widely used in the diagnosis of TBI, and a large amount of TBI CT data has accumulated in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the case under study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system that works on TBI CT images. In this web-based system, the user queries by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. TBI CT images often present diffuse or focal lesions, and in the TBIdoc system these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement and, based on it, propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support for the clinical decision-making process. It may also contribute to computer-aided education in TBI.
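
    A small sketch of the similarity computation, assuming the bin-based binary feature vectors are already given; the per-slice Jaccard score is standard, while the aggregation into a 3D score shown here (best-match average) is only a plausible stand-in for the paper's measure:

```python
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard similarity between two bin-based binary feature vectors."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

def series_similarity(query_slices, case_slices) -> float:
    """Stand-in 3D score between two CT series: each query-slice vector is
    matched to its best-scoring case-slice vector and the scores are averaged
    (the paper's actual aggregation may differ)."""
    return float(np.mean([max(jaccard(q, c) for c in case_slices) for q in query_slices]))

# Toy example with 4-bin binary vectors.
query = [np.array([1, 0, 1, 0]), np.array([0, 1, 1, 0])]
case = [np.array([1, 0, 1, 1]), np.array([0, 1, 0, 0])]
print(series_similarity(query, case))
```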

  15. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be done by using a database of 2-D views of the objects. The main problem in their proposed system is that the corresponding points must be known in order to interpolate the views; moreover, their system requires a supervisor to decide which class the presented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. Using the Dystal network, we show that objects can be classified with over 95% precision. We have used this system to classify objects such as a cube, cone, sphere, torus, and cylinder. Because of the nature of the Dystal network, the system reaches its stable point after a single presentation of a view. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes for 50 different input views), which can be used to select an optimum database of training views. The system is also robust to noise and to deformed views.

  16. Recognizing 3-D Objects Using 2-D Images

    DTIC Science & Technology

    1993-05-01

    [Fragmentary DTIC front matter: acknowledgment of ONR contracts N00014-91-J-4038 and N00014-85-K-0124 and Army contract DACA76-85-C-0010, table-of-contents fragments (introduction, features, building a practical indexing system, general object recognition), and a note that one chapter should be considered joint work between the author and David Clemens.]

  17. Surface gloss and color perception of 3D objects

    PubMed Central

    Xiao, Bei; Brainard, David H.

    2008-01-01

    Two experiments explore the color perception of objects in complex scenes. The first experiment examines the color perception of objects across variation in surface gloss. Observers adjusted the color appearance of a matte sphere to match that of a test sphere. Across conditions we varied the body color and glossiness of the test sphere. The data indicate that observers do not simply match the average light reflected from the test. Indeed, the visual system compensates for the physical effect of varying the gloss, so that appearance is stabilized relative to what is predicted by the spatial average. The second experiment examines how people perceive color across locations on an object. We replaced the test sphere with a soccer ball that had one of its hexagonal faces colored. Observers were asked to adjust the match sphere to have the same color appearance as this test patch. The test patch could be located at either an upper or lower location on the soccer ball. In addition, we varied the surface gloss of the entire soccer ball (including the test patch). The data show that there is an effect of test patch location on observers’ color matching, but this effect is small compared to the physical change in the average light reflected from the test patch across the two locations. In addition, the effect of glossy highlights on the color appearance of the test patch was consistent with the results from Experiment 1. PMID:18598406

  18. Easily retrievable objects among the NEO population

    NASA Astrophysics Data System (ADS)

    García Yárnoz, D.; Sanchez, J. P.; McInnes, C. R.

    2013-08-01

    Asteroids and comets are of strategic importance for science in an effort to understand the formation, evolution and composition of the Solar System. Near-Earth Objects (NEOs) are of particular interest because of their accessibility from Earth, but also because of their speculated wealth of material resources. The exploitation of these resources has long been discussed as a means to lower the cost of future space endeavours. In this paper, we consider the currently known NEO population and define a family of so-called Easily Retrievable Objects (EROs), objects that can be transported from accessible heliocentric orbits into the Earth's neighbourhood at affordable costs. The asteroid retrieval transfers are sought from the continuum of low energy transfers enabled by the dynamics of invariant manifolds; specifically, the retrieval transfers target planar, vertical Lyapunov and halo orbit families associated with the collinear equilibrium points of the Sun-Earth Circular Restricted Three Body problem. The judicious use of these dynamical features provides the best opportunity to find extremely low energy Earth transfers for asteroid material. A catalogue of asteroid retrieval candidates is then presented. Despite the highly incomplete census of very small asteroids, the ERO catalogue can already be populated with 12 different objects retrievable with less than 500 m/s of Δv. Moreover, the approach proposed represents a robust search and ranking methodology for future retrieval candidates that can be automatically applied to the growing survey of NEOs.

  19. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  20. Comparison between volcanic ash satellite retrievals and FALL3D transport model

    NASA Astrophysics Data System (ADS)

    Corradini, Stefano; Merucci, Luca; Folch, Arnau

    2010-05-01

    Volcanic eruptions represent one of the most important sources of natural pollution because of the large emission of gas and solid particles into the atmosphere. Volcanic clouds can contain different gas species (mainly H2O, CO2, SO2 and HCl) and a mix of silicate-bearing ash particles in the size range from 0.1 μm to a few mm. Determining the properties, movement and extent of volcanic ash clouds is an important scientific, economic, and public safety issue because of the harmful effects on the environment, public health and aviation. In particular, real-time tracking and forecasting of volcanic clouds is key for aviation safety. Several encounters of en-route aircraft with volcanic ash clouds have demonstrated the harmful effects of fine ash particles on modern aircraft. Alongside these considerations, the economic consequences caused by the disruption of airports must also be taken into account. Both safety and economic issues require robust and affordable ash cloud detection and trajectory forecasting, ideally combining remote sensing and modeling. We perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and Aerosol Optical Depth (AOD) and the FALL3D ash dispersal model. MODIS, aboard the NASA-Terra and NASA-Aqua polar satellites, is a multispectral instrument with 36 spectral bands from Visible (VIS) to Thermal InfraRed (TIR) and a spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 μm have been used for the ash retrievals through the Brightness Temperature Difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. We consider the Mt. Etna volcano 2002 eruptive event as a test case. Results show a good agreement between the mean AOT retrieved and the spatial ash dispersion in the

  1. Object Detection in Multi-view 3D Reconstruction Using Semantic and Geometric Context

    NASA Astrophysics Data System (ADS)

    Weinshall, D.; Golbert, A.

    2013-10-01

    We present a method for object detection in a multi-view 3D model. We use highly overlapping views, geometric data, and semantic surface classification in order to boost existing 2D algorithms. Specifically, a 3D model is computed from the overlapping views, and the model is segmented into semantic labels using height information, color and planar qualities. A 2D detector is run on all images, and the detections are then mapped into 3D via the model. The detections are clustered in 3D and represented by 3D boxes. Finally, the detections, visibility maps and semantic labels are combined using a Support Vector Machine to achieve a more robust object detector.
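
    A toy sketch of the final fusion step using scikit-learn; the three per-box features below (detector score, visible fraction, semantic label) are assumptions standing in for whatever feature vector the paper actually feeds to the SVM:

```python
import numpy as np
from sklearn.svm import SVC

# Each candidate 3D box is described by features derived from the 2D detector
# scores, the visibility maps and the semantic surface labels (hypothetical names).
X_train = np.array([
    [0.9, 0.8, 1.0],   # [mean detector score, visible fraction, ground-plane label]
    [0.2, 0.3, 0.0],
])
y_train = np.array([1, 0])  # 1 = true object, 0 = false positive

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([[0.7, 0.9, 1.0]]))  # classify a new candidate box
```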

  2. Graph representation by medial axis transform for 3D image retrieval

    NASA Astrophysics Data System (ADS)

    Kim, Duck H.; Yun, Il D.; Lee, Sang U.

    2001-04-01

    Recently, interest in 3D images generated from range data and CAD has increased greatly, and a variety of 3D image databases are being constructed. An efficient and fast scheme to access the desired image data is an important issue for Internet and digital library applications. However, it is difficult to manage a 3D image database because of its huge size, so a proper descriptor is necessary to manage the data efficiently and to support content-based search. In this paper, the proposed shape descriptor is based on voxelization of the 3D image. The medial axis transform, stemming from mathematical morphology, is performed on the voxelized 3D image, and a graph composed of nodes and edges is generated from the skeletons. The generated graph is well suited as a shape descriptor because it loses no geometric information and corresponds to human intuition about shape. The proposed shape descriptor would therefore be useful for 3D object recognition, compression, and content-based search.
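
    Assuming the 3D skeleton voxels from the medial axis transform are already available, a graph of nodes and edges can be built by connecting 26-neighbouring voxels, for example with networkx (a simple sketch, not the paper's implementation):

```python
import numpy as np
import networkx as nx

def skeleton_to_graph(skeleton: np.ndarray) -> nx.Graph:
    """Turn a 3D binary skeleton (e.g. the medial axis of a voxelized model)
    into a graph: every skeleton voxel is a node and edges connect
    26-neighbouring voxels. Branch points (degree > 2) and end points
    (degree == 1) can then serve as the descriptor's graph nodes."""
    g = nx.Graph()
    voxels = set(map(tuple, np.argwhere(skeleton)))
    g.add_nodes_from(voxels)
    for (x, y, z) in voxels:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    neighbour = (x + dx, y + dy, z + dz)
                    if neighbour != (x, y, z) and neighbour in voxels:
                        g.add_edge((x, y, z), neighbour)
    return g

# Toy skeleton: a short straight segment of voxels.
vol = np.zeros((5, 3, 3), dtype=bool)
vol[:, 1, 1] = True
print(skeleton_to_graph(vol).number_of_edges())  # 4
```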

  3. 3D reconstruction based on multiple views for close-range objects

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Zhang, Jianqing

    2007-06-01

    It is difficult for traditional photogrammetry techniques to reconstruct 3D models of close-range objects. To overcome this restriction and achieve 3D reconstruction of complex objects, we present a practical approach based on multi-baseline stereo vision. It incorporates image matching based on short-baseline multiple views, 3D measurement based on multi-ray intersection, and 3D reconstruction of the object based on a TIN or a parametric geometric model. Different complex objects were reconstructed in this way, and the results demonstrate the feasibility and effectiveness of the method.

  4. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  5. Reducing Non-Uniqueness in Satellite Gravity Inversion using 3D Object Oriented Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2013-12-01

    Non-uniqueness of satellite gravity interpretation has usually been reduced by using a priori information from various sources, e.g. seismic tomography models. The reduction in non-uniqueness has been based on velocity-density conversion formulas or user interpretation of 3D subsurface structures (objects) in seismic tomography models. However, these processes introduce additional uncertainty, either through the conversion relations, which depend on other physical parameters such as temperature and pressure, or through bias in the interpretation due to user choices and experience. In this research, a new methodology is introduced to extract 3D subsurface structures from 3D geophysical data using a state-of-the-art 3D Object Oriented Image Analysis (OOA) technique. 3D OOA is tested using a set of synthetic models that simulate the real situation in the study area of this research. Then, 3D OOA is used to extract 3D subsurface objects from a real 3D seismic tomography model. The extracted 3D objects are used to reconstruct a forward model, and its response is compared with the measured satellite gravity. Finally, the result of the forward modelling, based on the extracted 3D objects, is used to constrain the inversion process of satellite gravity data. Through this work, a new object-based approach is introduced to interpret and extract 3D subsurface objects from 3D geophysical data. This can be used to constrain modelling and inversion of potential field data using the 3D subsurface structures extracted from other methods. In summary, a new approach is introduced to constrain inversion of satellite gravity measurements and enhance interpretation capabilities.

  6. An object-oriented 3D integral data model for digital city and digital mine

    NASA Astrophysics Data System (ADS)

    Wu, Lixin; Wang, Yanbing; Che, Defu; Xu, Lei; Chen, Xuexi; Jiang, Yun; Shi, Wenzhong

    2005-10-01

    With the rapid development of cities, urban space has extended from the surface to the subsurface. As the key data source for representing city spatial information, 3D city spatial data are characterized by multiple objects, heterogeneity and multiple structures; with respect to the ground surface, they can be classified into three kinds: above-surface data, surface data and subsurface data. Current research on 3D city spatial information systems is divided naturally into two different branches, 3D City GIS (3D CGIS) and 3D Geological Modeling (3DGM). The former emphasizes 3D visualization of buildings and city terrain, while the latter emphasizes visualization of geological bodies and structures. For city planning and construction, however, it is extremely important to integrate all city spatial information, including above-surface, surface and subsurface objects, for integral analysis and spatial manipulation, and neither 3D CGIS nor 3DGM can currently realize such information integration, integral analysis and spatial manipulation. Considering 3D spatial modeling theory and methodologies, an object-oriented 3D integral spatial data model (OO3D-ISDM) is presented and realized in software. The model integrates geographical objects, surface buildings and geological objects seamlessly, with a TIN as their coupling interface. This paper introduces the conceptual model of OO3D-ISDM, which comprises 4 spatial elements, i.e. point, line, face and body, and 4 geometric primitives, i.e. vertex, segment, triangle and generalized tri-prism (GTP). The spatial model represents the geometry of surface buildings and geographical objects with triangles, and geological objects with GTPs. Any represented object, no matter whether it is a surface building, terrain or a subsurface object, can be described with the basic geometry element, i.e. the triangle. So the 3D spatial objects, surface buildings, terrain and geological objects can be

  7. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    ERIC Educational Resources Information Center

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of the Interactive 3D Graphics Learning Objects can be effective and efficient in terms of Performance, Time on task, and Learning Efficiency. The study explored two treatments, namely whole versus Part Presentations of the Interactive 3D Graphics Learning Objects,…

  8. 3D objects enlargement technique using an optical system and multiple SLMs for electronic holography.

    PubMed

    Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori; Oi, Ryutaro; Kurita, Taiichiro

    2012-09-10

    One problem in electronic holography, caused by the display performance of spatial light modulators (SLMs), is that the size of reconstructed 3D objects is small. Although methods for increasing the size using multiple SLMs have been considered, they typically suffered either from parts of the 3D objects being missing as a result of the gap between adjacent SLMs, or from the loss of vertical parallax. This paper proposes a method of resolving this problem by placing an optical system containing a lens array and other components in front of multiple SLMs. We used such an optical system and 9 SLMs to construct a device equivalent to an SLM with approximately 74,600,000 pixels, and used it to reconstruct 3D objects with both horizontal and vertical parallax, with an image size of 63 mm and without losing any part of the 3D objects.

  9. Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography.

    PubMed

    Kim, Daesuk; Javidi, Bahram

    2004-11-01

    We present a distortion-tolerant 3-D object recognition system using single-exposure on-axis digital holography. In contrast to distortion-tolerant 3-D object recognition employing a conventional phase-shifting scheme, which requires multiple exposures, our proposed method requires only a single digital hologram to be synthesized and used for distortion-tolerant 3-D object recognition. A benefit of the single-exposure on-axis approach is the enhanced practicality of digital holography for distortion-tolerant 3-D object recognition, in terms of its simplicity and high tolerance to external scene parameters such as moving targets. We show experimentally that single-exposure on-axis digital holography is capable of providing distortion-tolerant 3-D object recognition.

  10. Laser Fabrication of Affective 3D Objects with 1/f Fluctuation

    NASA Astrophysics Data System (ADS)

    Maekawa, Katsuhiro; Nishii, Tomohiro; Hayashi, Terutake; Akabane, Hideo; Agu, Masahiro

    The present paper describes the application of Kansei Engineering to the physical design of engineering products as well as its realization by laser sintering. We have investigated the affective information that might be included in three-dimensional objects such as a ceramic bowl for the tea ceremony. First, an X-ray CT apparatus is utilized to retrieve surface data from the teabowl, and then a frequency analysis is carried out after noise has been filtered. The surface fluctuation is characterized by a power spectrum that is in inverse proportion to the wave number f in circumference. Second, we consider how to realize the surface with a 1/f fluctuation on a computer screen using a 3D CAD model. The fluctuation is applied to a reference shape assuming that the outer surface has a spiral flow line on which unevenness is superimposed. Finally, the selective laser sintering method has been applied to the fabrication of 1/f fluctuation objects. Nylon powder is sintered layer by layer using a CO2 laser to form an artificial teabowl with complicated surface contours.
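
    A small sketch of how a circumferential fluctuation with a 1/f power spectrum (amplitude falling as 1/sqrt(f)) could be synthesized and superimposed on a circular reference contour; the numbers are arbitrary and the paper's actual CAD procedure is not reproduced:

```python
import numpy as np

def one_over_f_profile(n_points: int = 1024, seed: int = 0) -> np.ndarray:
    """Random circumferential fluctuation with power spectrum ~ 1/f
    (i.e. amplitude ~ 1/sqrt(f)), synthesized by an inverse FFT with
    random phases and normalized to unit standard deviation."""
    rng = np.random.default_rng(seed)
    freqs = np.arange(1, n_points // 2)
    amps = 1.0 / np.sqrt(freqs)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    spectrum = np.zeros(n_points, dtype=complex)
    spectrum[1:n_points // 2] = amps * np.exp(1j * phases)
    profile = np.fft.irfft(spectrum[:n_points // 2 + 1], n=n_points)
    return profile / profile.std()

# Superimpose the fluctuation (scaled to 0.5 mm rms) on a 60 mm radius contour.
theta = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
radius = 60.0 + 0.5 * one_over_f_profile(1024)
```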

  11. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.

  12. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    NASA Astrophysics Data System (ADS)

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula*, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with the hands on 3D meshes. Deformations are done using different modes of interaction that we detail in the paper: finger extremities are attached to vertices, edges or facets, and switching from one mode to another or changing the point of view is done using gestures. The determination of the most adequate gestures is part of the work.

  13. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach to integrating this 2-D information into a 3-D representation, based on a new method called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and a TRAPIX 5500 real-time image processor have been used to test the success of the entire system.

  14. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  15. Simultaneous perimeter measurement for 3D object with a binocular stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Peng, Zhao; Guo-Qiang, Ni

    2010-04-01

    A simultaneous measurement scheme for multiple three-dimensional (3D) objects' surface boundary perimeters is proposed. This scheme consists of three steps. First, a binocular stereo vision measurement system with two CCD cameras is devised to obtain the two images of the detected objects' 3D surface boundaries. Second, two geodesic active contours are applied to converge to the objects' contour edges simultaneously in the two CCD images to perform the stereo matching. Finally, the multiple spatial contours are reconstructed using the cubic B-spline curve interpolation. The true contour length of every spatial contour is computed as the true boundary perimeter of every 3D object. An experiment on the bent surface's perimeter measurement for the four 3D objects indicates that this scheme's measurement repetition error decreases to 0.7 mm.
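
    Assuming the matched 3D boundary points have already been reconstructed by stereo matching, the cubic B-spline interpolation and perimeter computation could look like the following SciPy sketch (the paper's own implementation may differ):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_perimeter(points_xyz: np.ndarray, samples: int = 5000) -> float:
    """Fit a closed cubic B-spline through reconstructed 3D boundary points
    (array of shape (N, 3), ordered along the contour) and approximate the
    perimeter by densely resampling the spline and summing segment lengths."""
    tck, _ = splprep(points_xyz.T, s=0.0, per=True, k=3)
    u = np.linspace(0.0, 1.0, samples)
    x, y, z = splev(u, tck)
    seg = np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2 + np.diff(z) ** 2)
    return float(seg.sum())

# Sanity check: a planar circle of radius 10 should give a perimeter near 2*pi*10.
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
circle = np.column_stack([10.0 * np.cos(t), 10.0 * np.sin(t), np.zeros_like(t)])
print(contour_perimeter(circle))  # ~62.8
```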

  16. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method of synthesizing a computer-generated hologram (CGH) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information about the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted over double-circle and four-circle shaped regions to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform, using a conjugate-symmetric extension; the hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator: the CGH from the computer is loaded onto the LCD, and by illuminating the LCD with a reference beam from a laser source, the amplitude and phase information included in the CGH is reconstructed through diffraction of the light modulated by the LCD.

  17. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object in digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, and current 3D scanner devices are advanced but very expensive. This study presents a simple 3D scanner prototype built at very low investment cost. The prototype consists of a webcam, a rotating desk driven by a stepper motor and controlled by an Arduino UNO, and a line laser. The research is limited to objects with the same radius about their center point (the object pivot). Scanning is performed by imaging the object profile highlighted by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. For each image acquisition, the object on the rotating desk is rotated by a fixed angle, so that after one full turn a set of images covering all sides is obtained. The profile is then extracted from all the images to obtain the digital dimensions of the object, which are calibrated against a length standard called a gauge block. The overall dimensions are finally reconstructed digitally into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error; based on the validation data, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
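
    A rough sketch of the reconstruction step, assuming the laser-line profiles have already been extracted from the images and calibrated to millimetres (the prototype itself uses Octave; Python is used here only for illustration):

```python
import numpy as np

def profiles_to_point_cloud(radii_mm: np.ndarray, heights_mm: np.ndarray,
                            step_deg: float) -> np.ndarray:
    """radii_mm has shape (n_angles, n_heights): the object radius extracted
    from the laser-line image at each turntable position (already calibrated
    to millimetres with the gauge block). Each profile is swept around the
    rotation axis to form (x, y, z) points."""
    angles = np.deg2rad(np.arange(radii_mm.shape[0]) * step_deg)
    points = []
    for theta, radii in zip(angles, radii_mm):
        points.append(np.column_stack([radii * np.cos(theta),
                                       radii * np.sin(theta),
                                       heights_mm]))
    return np.vstack(points)

# Toy example: a cylinder of radius 20 mm sampled every 10 degrees at 5 heights.
radii = np.full((36, 5), 20.0)
heights = np.linspace(0.0, 40.0, 5)
cloud = profiles_to_point_cloud(radii, heights, step_deg=10.0)
print(cloud.shape)  # (180, 3)
```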

  18. Synthesis and display of dynamic holographic 3D scenes with real-world objects.

    PubMed

    Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea; Näsänen, Risto; Naughton, Thomas J; Ferraro, Pietro

    2010-04-26

    A 3D scene is synthesized combining multiple optically recorded digital holograms of different objects. The novel idea consists of compositing moving 3D objects in a dynamic 3D scene using a process that is analogous to stop-motion video. However in this case the movie has the exciting attribute that it can be displayed and observed in 3D. We show that 3D dynamic scenes can be projected as an alternative to complicated and heavy computations needed to generate realistic-looking computer generated holograms. The key tool for creating the dynamic action is based on a new concept that consists of a spatial, adaptive transformation of digital holograms of real-world objects allowing full control in the manipulation of the object's position and size in a 3D volume with very high depth-of-focus. A pilot experiment to evaluate how viewers perceive depth in a conventional single-view display of the dynamic 3D scene has been performed.

  19. Fast calculation of computer-generated holograms based on 3-D Fourier spectrum for omnidirectional diffraction from a 3-D voxel-based object.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Yatagai, Toyohiko

    2012-09-10

    We have derived the basic spectral relation between a 3-D object and its 2-D diffracted wavefront by interpreting the diffraction calculation in the 3-D Fourier domain. Information on the 3-D object, which is inherent in the diffracted wavefront, becomes clear by using this relation. After the derivation, a method for obtaining the Fourier spectrum that is required to synthesize a hologram with a realistic sampling number for visible light is described. Finally, to verify the validity and the practicality of the above-mentioned spectral relation, fast calculation of a series of wavefronts radially diffracted from a 3-D voxel-based object is demonstrated.

  1. Electrophysiological evidence of separate pathways for the perception of depth and 3D objects.

    PubMed

    Gao, Feng; Cao, Bihua; Cao, Yunfei; Li, Fuhong; Li, Hong

    2015-05-01

    Previous studies have investigated the neural mechanism of 3D perception, but the neural distinction between 3D-objects and depth processing remains unclear. In the present study, participants viewed three types of graphics (planar graphics, perspective drawings, and 3D objects) while event-related potentials (ERP) were recorded. The ERP results revealed the following: (1) 3D objects elicited a larger and delayed N1 component than the other two types of stimuli; (2) during the P2 time window, significant differences between 3D objects and the perspective drawings were found mainly over a group of electrode sites in the left lateral occipital region; and (3) during the N2 complex, differences between planar graphics and perspective drawings were found over a group of electrode sites in the right hemisphere, whereas differences between perspective drawings and 3D objects were observed at another group of electrode sites in the left hemisphere. These findings support the claim that depth processing and object identification might be processed by separate pathways and at different latencies. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Depth representation of moving 3-D objects in apparent-motion path.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2008-01-01

    Apparent motion is perceived when two objects are presented alternately at different positions. The internal representations of apparently moving objects are formed in an apparent-motion path which lacks physical inputs. We investigated the depth information contained in the representation of 3-D moving objects in an apparent-motion path. We examined how probe objects-briefly placed in the motion path-affected the perceived smoothness of apparent motion. The probe objects comprised 3-D objects which were defined by being shaded or by disparity (convex/concave) or 2-D (flat) objects, while the moving objects were convex/concave objects. We found that flat probe objects induced a significantly smoother motion perception than concave probe objects only in the case of the convex moving objects. However, convex probe objects did not lead to smoother motion as the flat objects did, although the convex probe objects contained the same depth information for the moving objects. Moreover, the difference between probe objects was reduced when the moving objects were concave. These counterintuitive results were consistent in conditions when both depth cues were used. The results suggest that internal representations contain incomplete depth information that is intermediate between that of 2-D and 3-D objects.

  3. Plane-based optimization for 3D object reconstruction from single line drawings.

    PubMed

    Liu, Jianzhuang; Cao, Liangliang; Li, Zhenguo; Tang, Xiaoou

    2008-02-01

    In previous optimization-based methods of 3D planar-faced object reconstruction from single 2D line drawings, the missing depths of the vertices of a line drawing (and other parameters in some methods) are used as the variables of the objective functions. A 3D object with planar faces is derived by finding values for these variables that minimize the objective functions. These methods work well for simple objects with a small number N of variables. As N grows, however, it is very difficult for them to find expected objects. This is because with the nonlinear objective functions in a space of large dimension N, the search for optimal solutions can easily get trapped into local minima. In this paper, we use the parameters of the planes that pass through the planar faces of an object as the variables of the objective function. This leads to a set of linear constraints on the planes of the object, resulting in a much lower dimensional nullspace where optimization is easier to achieve. We prove that the dimension of this nullspace is exactly equal to the minimum number of vertex depths which define the 3D object. Since a practical line drawing is usually not an exact projection of a 3D object, we expand the nullspace to a larger space based on the singular value decomposition of the projection matrix of the line drawing. In this space, robust 3D reconstruction can be achieved. Compared with two most related methods, our method not only can reconstruct more complex 3D objects from 2D line drawings, but also is computationally more efficient.
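
    The core numerical step, finding the low-dimensional nullspace of the linear constraints on the plane parameters, can be sketched with an SVD; how the constraint matrix is assembled from the line drawing is not reproduced here:

```python
import numpy as np

def nullspace_basis(A: np.ndarray, tol: float = 1e-10) -> np.ndarray:
    """Orthonormal basis of the nullspace of a constraint matrix A, via SVD.
    In this setting A would encode the linear constraints on the face-plane
    parameters; optimization is then carried out in this low-dimensional space
    (expanded using small singular values when the drawing is inexact)."""
    _, s, vh = np.linalg.svd(A)
    rank = int((s > tol * s.max()).sum()) if s.size else 0
    return vh[rank:].T  # columns span the nullspace

A = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
print(nullspace_basis(A))  # one-dimensional nullspace along (1, 1, 1)/sqrt(3)
```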

  4. Phase-retrieved optical projection tomography for 3D imaging through scattering layers

    NASA Astrophysics Data System (ADS)

    Ancora, Daniele; Di Battista, Diego; Giasafaki, Georgia; Psycharakis, Stylianos; Liapis, Evangelos; Zacharopoulos, Athanasios; Zacharakis, Giannis

    2016-03-01

    Recently, great progress has been made in biological and biomedical imaging by combining non-invasive optical methods, novel adaptive light manipulation, and computational techniques for intensity-based phase recovery and three-dimensional image reconstruction. In relation to the work presented here, Optical Projection Tomography (OPT) is a well-established technique for imaging mostly transparent, absorbing biological models such as C. elegans and Danio rerio. By contrast, scattering layers such as the cocoon surrounding Drosophila during the pupal stage constitute a challenge for three-dimensional imaging through such a complex structure. However, recent studies have enabled image reconstruction through scattering curtains up to a few transport mean free paths via iterative phase retrieval algorithms, allowing objects hidden behind complex layers to be uncovered. By combining these two techniques, we explore the possibility of performing a three-dimensional image reconstruction of fluorescent objects embedded between scattering layers without compromising their structural integrity. Dynamical cross-correlation registration was implemented to resolve the translational and flipping ambiguity of the phase retrieval problem and to provide a correctly aligned set of data for the back-projection reconstruction. We have thus managed to reconstruct a hidden complex object between static scattering curtains and compared it with the effective reconstruction, in order to fully understand the process before the in-vivo biological implementation.

  5. A standardized set of 3-D objects for virtual reality research and applications.

    PubMed

    Peeters, David

    2017-06-23

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  6. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.

  7. Programming self assembly by designing the 3D shape of floating objects

    NASA Astrophysics Data System (ADS)

    Poty, Martin; Lagubeau, Guillaume; Lumay, Geoffroy; Vandewalle, Nicolas

    2014-11-01

    Self-assembly of floating particles driven by capillary forces at a liquid-air interface leads to the formation of two-dimensional structures. Using a 3D printer, millimeter-scale objects are produced. Their 3D shape is chosen in order to create capillary multipoles. The capillary interactions between these components can be either attractive or repulsive depending on the local deformations of the liquid-air interface. In order to understand how the shape of an object deforms the interface, we developed an original profilometry method. The measurements show that specific structures can be programmed by selecting the 3D branched shapes.

  8. Multiple capture locations for 3D ultrasound-guided robotic retrieval of moving bodies from a beating heart

    NASA Astrophysics Data System (ADS)

    Thienphrapa, Paul; Ramachandran, Bharat; Elhawary, Haytham; Taylor, Russell H.; Popovic, Aleksandra

    2012-02-01

    Free moving bodies in the heart pose a serious health risk as they may be released into the arteries, disrupting blood flow. These bodies may be the result of various medical conditions and trauma. The conventional approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass, and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving body in a beating heart using a modified normalized cross-correlation method, with mean RMS errors of 2.3 mm. We previously found the foreign body motion to be fast and abrupt, rendering infeasible a retrieval method based on direct tracking. We proposed a strategy based on guiding a robot to the most spatially probable location of the fragment and securing it upon its reentry to said location. To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively to select the best capture location based on constraints such as workspace, time, and device manipulability. Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable capability in an interventional system.
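    As a hedged illustration of how the candidate-location metrics named above (spatial occupancy, dwell time, visit frequency) could be tabulated from a tracked trajectory, the sketch below bins a hypothetical time-stamped position track onto a coarse voxel grid; the grid size, sampling period and inputs are illustrative, not the study's processing chain.

```python
# Sketch: rank candidate capture locations for a tracked fragment by
# occupancy, dwell time, and visit frequency on a coarse voxel grid.
import numpy as np

def capture_location_metrics(track_xyz, dt, voxel_size=2.0):
    """track_xyz: (T, 3) positions in mm sampled every dt seconds."""
    idx = np.floor(track_xyz / voxel_size).astype(int)
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()

    occupancy = np.bincount(inverse) / len(inverse)      # fraction of time spent
    dwell = np.zeros(len(keys))                          # longest stay (s)
    visits = np.zeros(len(keys), dtype=int)              # number of entries
    run_start = 0
    for t in range(1, len(inverse) + 1):
        # Close a run whenever the fragment leaves the current voxel.
        if t == len(inverse) or inverse[t] != inverse[run_start]:
            cell = inverse[run_start]
            visits[cell] += 1
            dwell[cell] = max(dwell[cell], (t - run_start) * dt)
            run_start = t
    return keys * voxel_size, occupancy, dwell, visits
```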

  9. Intelligent multisensor concept for image-guided 3D object measurement with scanning laser radar

    NASA Astrophysics Data System (ADS)

    Weber, Juergen

    1995-08-01

    This paper presents an intelligent multisensor concept for measuring 3D objects using an image-guided laser radar scanner. The fields of application are all kinds of industrial inspection and surveillance tasks where it is necessary to detect, measure and recognize 3D objects at distances of up to 10 m with high flexibility. Such applications might be the surveillance of security areas or container storage areas as well as navigation and collision avoidance of autonomous guided vehicles. The multisensor system consists of a standard CCD matrix camera and a 1D laser radar ranger which is mounted on a 2D mirror scanner. With this sensor combination it is possible to acquire gray scale intensity data as well as absolute 3D information. To improve the system performance and flexibility, the intensity data of the scene captured by the camera can be used to focus the measurement of the 3D sensor on relevant areas. The camera guidance of the laser scanner is useful because the acquisition of spatial information is relatively slow compared to the image sensor's ability to snap an image frame in 40 ms. Relevant areas in a scene are located by detecting edges of objects utilizing various image processing algorithms. The complete sensor system is controlled by three microprocessors carrying out the 3D data acquisition, the image processing tasks and the multisensor integration. The paper deals with the details of the multisensor concept. It describes the process of sensor guidance and 3D measurement and presents some practical results of our research.
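    A hedged sketch of the image-guided step described above: fast 2D edge detection in the camera frame is used to nominate regions for the much slower 3D laser scan. OpenCV's Canny detector and contour bounding boxes stand in for whichever edge operators the original system used; the thresholds and minimum area are illustrative.

```python
# Sketch: use fast 2D edge detection to pick regions worth scanning in 3D.
import cv2

def scan_regions_from_image(gray_image, min_area=500):
    edges = cv2.Canny(gray_image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:          # ignore tiny edge fragments
            regions.append((x, y, w, h))
    return regions                     # rectangles to hand to the laser scanner
```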

  10. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious, no-touch artifacts. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.

  11. Optimal Local Searching for Fast and Robust Textureless 3D Object Tracking in Highly Cluttered Backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2013-06-13

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  12. Optimal local searching for fast and robust textureless 3D object tracking in highly cluttered backgrounds.

    PubMed

    Seo, Byung-Kuk; Park, Hanhoon; Park, Jong-Il; Hinterstoisser, Stefan; Ilic, Slobodan

    2014-01-01

    Edge-based tracking is a fast and plausible approach for textureless 3D object tracking, but its robustness is still very challenging in highly cluttered backgrounds due to numerous local minima. To overcome this problem, we propose a novel method for fast and robust textureless 3D object tracking in highly cluttered backgrounds. The proposed method is based on optimal local searching of 3D-2D correspondences between a known 3D object model and 2D scene edges in an image with heavy background clutter. In our searching scheme, searching regions are partitioned into three levels (interior, contour, and exterior) with respect to the previous object region, and confident searching directions are determined by evaluating candidates of correspondences on their region levels; thus, the correspondences are searched among likely candidates in only the confident directions instead of searching through all candidates. To ensure the confident searching direction, we also adopt the region appearance, which is efficiently modeled on a newly defined local space (called a searching bundle). Experimental results and performance evaluations demonstrate that our method fully supports fast and robust textureless 3D object tracking even in highly cluttered backgrounds.

  13. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    PubMed

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

    We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are threefold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.
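    The part representation described above (2D projections of a few 3D control points per part, combined across visible parts) maps naturally onto a perspective-n-point solve. The following sketch, with hypothetical inputs, shows one way such a pose could be recovered with OpenCV; it is not the authors' implementation.

```python
# Sketch: recover an object pose from predicted 2D projections of 3D control
# points, stacked over all visible parts (at least four correspondences).
import numpy as np
import cv2

def pose_from_control_points(model_points, predicted_projections, K):
    """model_points: (N, 3) control points in the object frame,
    predicted_projections: (N, 2) their predicted image positions,
    K: 3x3 camera intrinsics. Returns a rotation vector and translation."""
    obj = np.asarray(model_points, dtype=np.float64)
    img = np.asarray(predicted_projections, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP failed")
    return rvec, tvec
```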

  14. Combining Abundance/Temperature Retrieval with 3D Atmospheric Circulation Simulations of Hot Jupiters

    NASA Astrophysics Data System (ADS)

    Heng, Kevin

    2011-09-01

    The atmospheres of hot Jupiters are three-dimensional, non-linear entities and understanding them requires the construction of a hierarchy of models of varying sophistication. Since previous work has either focused on the atmospheric dynamics or implemented multi-band radiative transfer, a reasonable approach is to combine the treatment of 3D dynamics with dual-band radiative transfer, where the assumption is that the stellar irradiation and re-emitted radiation from the exoplanet are at distinct wavelengths. I report on the successful implementation of such a setup and demonstrate how it can be used to compute self-consistent temperature-pressure profiles on both the day and night sides of a hot Jupiter, as well as zonal-wind profiles, circulation cell patterns and the angular/temporal offset of the hotspot from the substellar point. In particular, the hotspot offset should aid us in distinguishing between different types of hot Jupiter atmospheres. Together with N. Madhusudhan, we combine the dual-band simulation technique with the abundance/temperature retrieval method of Madhusudhan & Seager, by empirically constraining a range of values for the broad-band opacities which are consistent with the current observations. The advantage of our novel method is that the range of opacities used improves with time as the observations get better. The ability to thoroughly, efficiently and systematically explore the interplay between atmospheric dynamics, radiation and synthetic spectra is an important step forward, as it prepares us for the theoretical interpretation of exoplanetary spectra which will be obtained by future space-based missions such as JWST and EChO. I acknowledge generous support from the Zwicky Prize Fellowship and the Star and Planet Formation Group (PI: Michael Meyer) at ETH Zurich.

  15. 3-D Laser-Based Multiclass and Multiview Object Detection in Cluttered Indoor Scenes.

    PubMed

    Zhang, Xuesong; Zhuang, Yan; Hu, Huosheng; Wang, Wei

    2017-01-01

    This paper investigates the problem of multiclass and multiview 3-D object detection for service robots operating in a cluttered indoor environment. A novel 3-D object detection system using laser point clouds is proposed to deal with cluttered indoor scenes with limited and imbalanced training data. Raw 3-D point clouds are first transformed to 2-D bearing angle images to reduce the computational cost, and then jointly trained multiple object detectors are deployed to perform the multiclass and multiview 3-D object detection. A reclassification technique is applied to each detected low-confidence bounding box in the system to reduce false alarms in the detection. The RUS-SMOTEboost algorithm is used to train a group of independent binary classifiers with imbalanced training data. Dense histograms of oriented gradients and local binary pattern features are combined as a feature set for the reclassification task. Based on the Dalian University of Technology (DUT) 3-D data set, taken from various office and household environments, experimental results show the validity and good performance of the proposed method.

  16. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting-steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
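    A minimal sketch of the integer-to-integer lifting idea mentioned above, using the simple S-transform (integer Haar) rather than the paper's filters; the rounding inside the lifting steps is what makes exact, lossless inversion possible.

```python
# Sketch: one level of an integer-to-integer lifting transform (S-transform),
# illustrating why lifting permits exact (lossless) inversion.
import numpy as np

def lift_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]          # assumes even-length input
    detail = odd - even                   # predict step
    approx = even + (detail >> 1)         # update step (integer average)
    return approx, detail

def lift_inverse(approx, detail):
    even = approx - (detail >> 1)
    odd = detail + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([5, 7, 2, 8, 9, 4], dtype=np.int64)
a, d = lift_forward(signal)
assert np.array_equal(lift_inverse(a, d), signal)   # exact reconstruction
```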

  17. 3D terahertz synthetic aperture imaging of objects with arbitrary boundaries

    NASA Astrophysics Data System (ADS)

    Kniffin, G. P.; Zurk, L. M.; Schecklman, S.; Henry, S. C.

    2013-09-01

    Terahertz (THz) imaging has shown promise for nondestructive evaluation (NDE) of a wide variety of manufactured products including integrated circuits and pharmaceutical tablets. Its ability to penetrate many non-polar dielectrics allows tomographic imaging of an object's 3D structure. In NDE applications, the material properties of the target(s) and background media are often well-known a priori and the objective is to identify the presence and/or 3D location of structures or defects within. The authors' earlier work demonstrated the ability to produce accurate 3D images of conductive targets embedded within a high-density polyethylene (HDPE) background. That work assumed a priori knowledge of the refractive index of the HDPE as well as the physical location of the planar air-HDPE boundary. However, many objects of interest exhibit non-planar interfaces, such as varying degrees of curvature over the extent of the surface. Such irregular boundaries introduce refraction effects and other artifacts that distort 3D tomographic images. In this work, two reconstruction techniques are applied to THz synthetic aperture tomography: a holographic reconstruction method that accurately detects the 3D location of an object's irregular boundaries, and a split-step Fourier algorithm that corrects the artifacts introduced by the surface irregularities. The methods are demonstrated with measurements from a THz time-domain imaging system.

  18. Pose detection of a 3D object using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2016-09-01

    The problem of 3D pose recognition of a rigid object is difficult to solve because the pose in a 3D space can vary with multiple degrees of freedom. In this work, we propose an accurate method for 3D pose estimation based on template matched filtering. The proposed method utilizes a bank of space-variant filters which take into account different pose states of the target and local statistical properties of the input scene. The state parameters of location coordinates, orientation angles, and scaling parameters of the target are estimated with high accuracy in the input scene. Experimental tests are performed for real and synthetic scenes. The proposed system yields good performance for 3D pose recognition in terms of detection efficiency, location and orientation errors.
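    A hedged sketch of matching against a bank of pose-dependent templates, the core idea described above: each template corresponds to one candidate pose state, and the pose whose correlation peak is strongest wins. The templates, scene and scoring are illustrative stand-ins, not the paper's space-variant filters.

```python
# Sketch: pick the best pose hypothesis by correlating the scene with a bank
# of templates, one per candidate pose state.
import numpy as np
from scipy.signal import fftconvolve

def best_pose(scene, template_bank):
    """template_bank: dict mapping pose parameters -> 2D template array."""
    best, best_score, best_loc = None, -np.inf, None
    scene = scene - scene.mean()
    for pose, tpl in template_bank.items():
        tpl = tpl - tpl.mean()
        corr = fftconvolve(scene, tpl[::-1, ::-1], mode="same")  # correlation
        loc = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[loc] > best_score:
            best, best_score, best_loc = pose, corr[loc], loc
    return best, best_loc, best_score
```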

  19. Three-dimensional object recognition using gradient descent and the universal 3-D array grammar

    NASA Astrophysics Data System (ADS)

    Baird, Leemon C., III; Wang, Patrick S. P.

    1992-02-01

    A new algorithm is presented for applying Marill's minimum standard deviation of angles (MSDA) principle for interpreting line drawings without models. Even though no explicit models or additional heuristics are included, the algorithm tends to reach the same 3-D interpretations of 2-D line drawings that humans do. Marill's original algorithm repeatedly generated a set of interpretations and chose the one with the lowest standard deviation of angles (SDA). The algorithm presented here explicitly calculates the partial derivatives of SDA with respect to all adjustable parameters, and follows this gradient to minimize SDA. For a picture with lines meeting at m points forming n angles, the gradient descent algorithm requires O(n) time to adjust all the points, while the original algorithm required O(mn) time to do so. For the pictures described by Marill, this gradient descent algorithm running on a Macintosh II was found to be one to two orders of magnitude faster than the original algorithm running on a Symbolics, while still giving comparable results. Once the 3-D interpretation of the line drawing has been found, the 3-D object can be reduced to a description string using the Universal 3-D Array Grammar. This is a general grammar which allows any connected object represented as a 3-D array of pixels to be reduced to a description string. The algorithm based on this grammar is well suited to parallel computation, and could run efficiently on parallel hardware. This paper describes both the MSDA gradient descent algorithm and the Universal 3-D Array Grammar algorithm. Together, they transform a 2-D line drawing represented as a list of line segments into a string describing the 3-D object pictured. The strings could then be used for object recognition, learning, or storage for later manipulation.
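    A minimal sketch of the MSDA idea under illustrative assumptions: the 2D junction coordinates are fixed, the per-point depths are adjusted to minimize the standard deviation of angles, and finite-difference gradients stand in for the explicit partial derivatives derived in the paper. All names and data structures are hypothetical.

```python
# Sketch: minimize the standard deviation of angles (SDA) of a wireframe
# interpretation by adjusting point depths with gradient descent.
import numpy as np

def sda(z, xy, angle_triples):
    pts = np.column_stack([xy, z])
    angles = []
    for i, j, k in angle_triples:          # angle at vertex j between i and k
        u, v = pts[i] - pts[j], pts[k] - pts[j]
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.std(angles)

def minimize_sda(xy, angle_triples, steps=500, lr=0.1, eps=1e-4):
    z = np.zeros(len(xy))                  # start with a flat interpretation
    for _ in range(steps):
        grad = np.zeros_like(z)
        for n in range(len(z)):            # finite-difference partial derivatives
            zp, zm = z.copy(), z.copy()
            zp[n] += eps
            zm[n] -= eps
            grad[n] = (sda(zp, xy, angle_triples) -
                       sda(zm, xy, angle_triples)) / (2 * eps)
        z -= lr * grad                     # follow the gradient of SDA downhill
    return z
```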

  20. 3D polymer objects with electronic components interconnected via conformally printed electrodes.

    PubMed

    Jo, Yejin; Kim, Ju Young; Jung, Sungmook; Ahn, Bok Yeop; Lewis, Jennifer A; Choi, Youngmin; Jeong, Sunho

    2017-10-12

    We report the fabrication of 3D polymer objects that contain electrical components interconnected by conductive silver/carbon nanotube inks printed conformally onto their surfaces and through vertical vias. Electrical components are placed within internal cavities and recessed surfaces of polymer objects produced by stereolithography. Conformally printed electrodes that interconnect each electrical component exhibit a conductivity of ∼2 × 10^4 S cm^-1 upon annealing at temperatures below 100 °C. Multiple 3D objects were created to demonstrate this hybrid additive manufacturing approach, including those with an embedded circuit operated by an air-suspended switch and a 3D circuit board composed of microcontroller unit, resistor, battery, light-emitting diode and sensor.

  1. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ~50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  2. Learning 3D Object Templates by Quantizing Geometry and Appearance Spaces.

    PubMed

    Hu, Wenze; Zhu, Song-Chun

    2015-06-01

    While 3D object-centered shape-based models are appealing in comparison with 2D viewer-centered appearance-based models for their lower model complexities and potentially better view generalizabilities, the learning and inference of 3D models has been much less studied in the recent literature due to two factors: i) the enormous complexities of 3D shapes in geometric space; and ii) the gap between 3D shapes and their appearances in images. This paper aims at tackling the two problems by studying an And-Or Tree (AoT) representation that consists of two parts: i) a geometry-AoT quantizing the geometry space, i.e. the possible compositions of 3D volumetric parts and 2D surfaces within the volumes; and ii) an appearance-AoT quantizing the appearance space, i.e. the appearance variations of those shapes in different views. In this AoT, an And-node decomposes an entity into constituent parts, and an Or-node represents alternative ways of decomposition. Thus it can express a combinatorial number of geometry and appearance configurations through small dictionaries of 3D shape primitives and 2D image primitives. In the quantized space, the problem of learning a 3D object template is transformed into a structure search problem which can be efficiently solved in a dynamic programming algorithm by maximizing the information gain. We focus on learning 3D car templates from the AoT and collect a new car dataset featuring more diverse views. The learned car templates integrate both the shape-based model and the appearance-based model to combine the benefits of both. In experiments, we show three aspects: 1) the AoT is more efficient than the frequently used octree method in space representation; 2) the learned 3D car template matches state-of-the-art performance on car detection and pose estimation in a public multi-view car dataset; and 3) in our new dataset, the learned 3D template solves the joint task of simultaneous object detection, pose/view estimation, and part

  3. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
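    A hedged sketch of the general SA-plus-gradient-descent idea described above, applied to a simplified localization problem in which range estimates to fixed readers are assumed to be available; the cost function, annealing schedule and all inputs are illustrative, not the study's algorithm.

```python
# Sketch: estimate a tag's 3D position from range estimates to fixed readers,
# mixing simulated-annealing-style random moves (to escape local minima) with
# gradient steps (to reduce error).
import numpy as np

def localize(readers, ranges, iters=2000, t0=5.0, lr=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    pos = readers.mean(axis=0)                       # start at reader centroid

    def cost(p):
        return np.sum((np.linalg.norm(readers - p, axis=1) - ranges) ** 2)

    best, best_cost = pos.copy(), cost(pos)
    for k in range(iters):
        temperature = t0 * (1 - k / iters)           # linear cooling schedule
        # Gradient of the squared-range residual cost.
        diffs = pos - readers
        dists = np.linalg.norm(diffs, axis=1) + 1e-9
        grad = 2 * np.sum(((dists - ranges) / dists)[:, None] * diffs, axis=0)
        candidate = pos - lr * grad + temperature * rng.normal(size=3)
        delta = cost(candidate) - cost(pos)
        if delta < 0 or rng.random() < np.exp(-delta / max(temperature, 1e-6)):
            pos = candidate                           # annealing acceptance rule
        if cost(pos) < best_cost:
            best, best_cost = pos.copy(), cost(pos)
    return best
```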

  4. Whole object surface area and volume of partial-view 3D models

    NASA Astrophysics Data System (ADS)

    Mulukutla, Gopal K.; Genareau, Kimberly D.; Durant, Adam J.; Proussevitch, Alexander A.

    2017-08-01

    Micro-scale 3D models, important components of many studies in science and engineering, are often used to determine morphological characteristics such as shape, surface area and volume. The application of techniques such as stereoscopic scanning electron microscopy on whole objects often results in ‘partial-view’ models with a portion of the object not within the field of view and thus not captured in the 3D model. The nature and extent of the surface not captured is dependent on the complex interaction of imaging system attributes (e.g. working distance, viewing angle) with object size, shape and morphology. As a result, any simplistic assumptions in estimating whole object surface area or volume can lead to significant errors. In this study, we report on a novel technique to estimate the physical fraction of an object captured in a partial-view 3D model of an otherwise whole object. This allows a more accurate estimate of surface area and volume. Using 3D models, we demonstrate the robustness of this method and the accuracy of surface area and volume estimates relative to true values.

  5. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821

  6. 3D shape shearography with integrated structured light projection for strain inspection of curved objects

    NASA Astrophysics Data System (ADS)

    Anisimov, Andrei G.; Groves, Roger M.

    2015-05-01

    Shearography (speckle pattern shearing interferometry) is a non-destructive testing technique that provides full-field surface strain characterization. Although real-life objects, especially in aerospace, transport or cultural heritage, are not flat (e.g. aircraft leading edges or sculptures), their inspection with shearography is of interest for both hidden defect detection and material characterization. Accurate strain measurement of a highly curved or free-form surface needs to be performed by combining inline object shape measurement and processing of shearography data in 3D. Previous research has not provided a general solution. This research is devoted to the practical questions of 3D shape shearography system development for surface strain characterization of curved objects. The complete procedure of calibration and data processing of a 3D shape shearography system with an integrated structured light projector is presented. This includes an estimation of the actual shear distance and a sensitivity matrix correction within the system field of view. For the experimental part a 3D shape shearography system prototype was developed. It employs three spatially-distributed shearing cameras, with Michelson interferometers acting as the shearing devices, one illumination laser source and a structured light projector. The developed system performance was evaluated with a previously reported cylinder specimen (length 400 mm, external diameter 190 mm) loaded by internal pressure. Further steps for the 3D shape shearography prototype and the technique development are also proposed.

  7. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  8. Identification of superficial defects in reconstructed 3D objects using phase-shifting fringe projection

    NASA Astrophysics Data System (ADS)

    Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.

    2016-09-01

    3D reconstruction of small objects is used in applications of surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied a structured light projection technique, specifically sinusoidal fringes, together with a phase unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. We implemented a technique based on a 2D flat pattern as the calibration process, so that the intrinsic and extrinsic parameters of the camera and the DLP were defined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces tested with Vickers indentation. Areas of less than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, identification of primitives, training and classification were implemented to recognize defects such as holes, cracks, rough textures and bumps. We found that pattern recognition strategies are useful when quality supervision of surfaces has enough points to evaluate the defective region, because the identification of defects in small objects is a demanding visual inspection activity.

  9. Retrieval of Vegetation Structural Parameters and 3-D Reconstruction of Forest Canopies Using Ground-Based Echidna® Lidar

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.; Schaaf, C.; Woodcock, C. E.; Jupp, D. L.; Culvenor, D.; Newnham, G.; Lovell, J.

    2010-12-01

    A ground-based, scanning, near-infrared lidar, the Echidna® validation instrument (EVI), built by CSIRO Australia, retrieves structural parameters of forest stands rapidly and accurately, and by merging multiple scans into a single point cloud, the lidar also provides 3-D stand reconstructions. Echidna lidar technology scans with pulses of light at 1064 nm wavelength and digitizes the full return waveform sufficiently finely to recover and distinguish the differing shapes of return pulses as they are scattered by leaves, trunks, and branches. Deployments in New England in 2007 and the southern Sierra Nevada of California in 2008 tested the ability of the instrument to retrieve mean tree diameter, stem count density (stems/ha), basal area, and above-ground woody biomass from single scans at points beneath the forest canopy. Parameters retrieved from five scans located within six 1-ha stand sites matched manually-measured parameters with values of R2 = 0.94-0.99 in New England and 0.92-0.95 in the Sierra Nevada. Retrieved leaf area index (LAI) values were similar to those of LAI-2000 and hemispherical photography. In New England, an analysis of variance showed that EVI-retrieved values were not significantly different from other methods (power = 0.84 or higher). In the Sierra, R2 = 0.96 and 0.81 for hemispherical photos and LAI-2000, respectively. Foliage profiles, which measure leaf area with canopy height, showed distinctly different shapes for the stands, depending on species composition and age structure. New England stand heights, obtained from foliage profiles, were not significantly different (power = 0.91) from RH100 values observed by LVIS in 2003. Three-dimensional stand reconstruction identifies one or more “hits” along the pulse path coupled with the peak return of each hit expressed as apparent reflectance. Returns are classified as trunk, leaf, or ground returns based on the shape of the return pulse and its location. These data provide a point

  10. Vertical profiles of the 3-D wind velocity retrieved from multiple wind lidars performing triple range-height-indicator scans

    NASA Astrophysics Data System (ADS)

    Debnath, Mithu; Valerio Iungo, G.; Ashton, Ryan; Brewer, W. Alan; Choukulkar, Aditya; Delgado, Ruben; Lundquist, Julie K.; Shaw, William J.; Wilczak, James M.; Wolfe, Daniel

    2017-02-01

    Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. However, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.
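    The core retrieval step, reconstructing (u, v, w) from simultaneous radial velocities measured along known beam directions, reduces to a small linear solve. The sketch below is illustrative (angles and velocities are placeholders, not campaign data); it also makes visible why the vertical component tends to be the least accurate, since low-elevation beams contribute only small coefficients to the vertical column of the system.

```python
# Sketch: retrieve the 3-D wind vector from three simultaneous line-of-sight
# (radial) velocities measured along known beam directions, as in a triple
# RHI intersection.
import numpy as np

def wind_from_radial(azimuths_deg, elevations_deg, radial_velocities):
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    # Unit beam vectors (east, north, up) for each lidar.
    beams = np.column_stack([np.cos(el) * np.sin(az),
                             np.cos(el) * np.cos(az),
                             np.sin(el)])
    # Each radial velocity is the projection of (u, v, w) onto its beam:
    # beams @ [u, v, w] = radial_velocities, solved in a least-squares sense.
    wind, *_ = np.linalg.lstsq(beams, np.asarray(radial_velocities), rcond=None)
    return wind   # (u, v, w)

# Illustrative call with placeholder geometry and measurements.
u, v, w = wind_from_radial([30.0, 150.0, 270.0], [15.0, 20.0, 25.0],
                           [3.2, -1.1, -2.4])
```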

  11. Vertical profiles of the 3-D wind velocity retrieved from multiple wind lidars performing triple range-height-indicator scans

    DOE PAGES

    Debnath, Mithu; Iungo, G. Valerio; Ashton, Ryan; ...

    2017-02-06

    Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. However, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.

  12. Vertical profiles of the 3-D wind velocity retrieved from multiple wind lidars performing triple range-height-indicator scans

    SciTech Connect

    Debnath, Mithu; Iungo, G. Valerio; Ashton, Ryan; Brewer, W. Alan; Choukulkar, Aditya; Delgado, Ruben; Lundquist, Julie K.; Shaw, William J.; Wilczak, James M.; Wolfe, Daniel

    2017-01-01

    Vertical profiles of 3-D wind velocity are retrieved from triple range-height-indicator (RHI) scans performed with multiple simultaneous scanning Doppler wind lidars. This test is part of the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign carried out at the Boulder Atmospheric Observatory. The three wind velocity components are retrieved and then compared with the data acquired through various profiling wind lidars and high-frequency wind data obtained from sonic anemometers installed on a 300 m meteorological tower. The results show that the magnitude of the horizontal wind velocity and the wind direction obtained from the triple RHI scans are generally retrieved with good accuracy. However, poor accuracy is obtained for the evaluation of the vertical velocity, which is mainly due to its typically smaller magnitude and to the error propagation connected with the data retrieval procedure and accuracy in the experimental setup.

  13. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. NURBS-skeleton is used to extract the skeleton of both views. The affine invariant property of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
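    A minimal sketch of the growing step described above: each skeleton point contributes a sphere whose radius is its distance to the object boundary, and the union of these spheres fills in the solid. The voxel grid, boundary mask and skeleton coordinates are hypothetical inputs, not the paper's data.

```python
# Sketch: reconstruct a solid by filling spheres centred at skeleton points,
# each with radius equal to the distance from that point to the boundary
# (taken here from a distance transform of a voxelized boundary mask).
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_from_skeleton(boundary_mask, skeleton_voxels):
    """boundary_mask: boolean 3D array, True on the object boundary.
    skeleton_voxels: (K, 3) integer voxel coordinates of skeleton points."""
    dist_to_boundary = distance_transform_edt(~boundary_mask)
    solid = np.zeros_like(boundary_mask, dtype=bool)
    grid = np.indices(boundary_mask.shape)
    for vx in skeleton_voxels:
        r = dist_to_boundary[tuple(vx)]
        sq = sum((grid[a] - vx[a]) ** 2 for a in range(3))
        solid |= sq <= r ** 2                # union of boundary-tangent spheres
    return solid
```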

  14. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  15. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  16. Testing remote sensing on artificial observations: impact of drizzle and 3-D cloud structure on effective radius retrievals

    NASA Astrophysics Data System (ADS)

    Zinner, T.; Wind, G.; Platnick, S.; Ackerman, A. S.

    2010-10-01

    Remote sensing of cloud effective particle size with passive sensors like the Moderate Resolution Imaging Spectroradiometer (MODIS) is an important tool for cloud microphysical studies. As a measure of the radiatively relevant droplet size, effective radius can be retrieved with different combinations of visible through shortwave and midwave infrared channels. In practice, retrieved effective radii from these combinations can be quite different. This difference is perhaps indicative of different penetration depths and path lengths for the spectral reflectances used. In addition, operational liquid water cloud retrievals are based on the assumption of a relatively narrow distribution of droplet sizes; the role of larger precipitation particles in these distributions is neglected. Therefore, possible explanations for the discrepancy in some MODIS spectral size retrievals could include 3-D radiative transport effects, including sub-pixel cloud inhomogeneity, and/or the impact of drizzle formation. For three cloud cases the possible factors of influence are isolated and investigated in detail by the use of simulated cloud scenes and synthetic satellite data: marine boundary layer cloud scenes from large eddy simulations (LES) with detailed microphysics are combined with Monte Carlo radiative transfer calculations that explicitly account for the detailed droplet size distributions as well as 3-D radiative transfer to simulate MODIS observations. The operational MODIS optical thickness and effective radius retrieval algorithm is applied to these scenes, and the results are compared to the given LES microphysics. We investigate two types of marine cloud situations, each with and without drizzle, from LES simulations: (1) a typical daytime stratocumulus deck at two times in the diurnal cycle and (2) one scene with scattered cumulus. Only a small impact of drizzle formation on the retrieved domain average and on the differences between the three effective radius retrievals is noticed.

  17. 3D-modeling of deformed halite hopper crystals by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-12-01

    Object Based Image Analysis (OBIA) is an established method for analyzing multiscale and multidimensional imagery in a range of disciplines. In the present study this method was used for the 3D reconstruction of halite hopper crystals in a mudrock sample, based on Computed Tomography data. To quantitatively assess the reliability of OBIA results, they were benchmarked against a corresponding "gold standard", a reference 3D model of the halite crystals that was derived by manual expert digitization of the CT images. For accuracy assessment, classical per-scene statistics were extended to per-object statistics. The strength of OBIA was to recognize all objects similar to halite hopper crystals and in particular to eliminate cracks. Using a support vector machine (SVM) classifier on top of OBIA, unsuitable objects like halite crystal clusters, polyhalite-coated crystals and spherical halite crystals were effectively dismissed, but simultaneously the number of well-shaped halites was reduced.

  18. 4Pi fluorescence detection and 3D particle localization with a single objective

    PubMed Central

    Schnitzbauer, J.; McGorty, R.; Huang, B.

    2013-01-01

    Coherent detection through two opposing objectives (4Pi configuration) improves the precision of three-dimensional (3D) single-molecule localization substantially along the axial direction, but suffers from instrument complexity and maintenance difficulty. To address these issues, we have realized 4Pi fluorescence detection by sandwiching the sample between the objective and a mirror, and create interference of direct incidence and mirror-reflected signal at the camera with a spatial light modulator. Multifocal imaging using this single-objective mirror interference scheme offers improvement in the axial localization similar to the traditional 4Pi method. We have also devised several PSF engineering schemes to enable 3D localization with a single emitter image, offering better axial precision than normal single-objective localization methods such as astigmatic imaging. PMID:24105517

  19. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    ERIC Educational Resources Information Center

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  20. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    ERIC Educational Resources Information Center

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  1. Printing of metallic 3D micro-objects by laser induced forward transfer.

    PubMed

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop on demand transfer of molten, femto-liter, metal droplets with a high jetting directionality. Such small volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high aspect ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed.

  2. 3D measurement of large-scale object using independent sensors

    NASA Astrophysics Data System (ADS)

    Yong, Liu; Yuan, Jia; Yong, Jiang; Luo, Xia

    2017-05-01

    Registering local sets of points to obtain one final data set is a vital technology in the 3D measurement of large-scale objects. In this paper, a new optical 3D measurement system using fringe projection is presented, which is divided into four parts: a moving device, a linking camera, stereo cameras and a projector. Controlled by a computer, a sequence of local sets of points can be obtained based on temporal phase unwrapping and stereo vision. Two basic principles, place dependence and phase dependence, are used to register these local sets of points into one final data set, and bundle adjustment is used to eliminate registration errors.
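    As a hedged illustration of the first processing step mentioned above (temporal phase unwrapping starts from a wrapped phase map), the sketch below computes the wrapped phase from an N-step phase-shifting fringe sequence; the image stack is a placeholder, and the rest of the pipeline (unwrapping, stereo matching, registration, bundle adjustment) is not shown.

```python
# Sketch: wrapped phase from an N-step phase-shifting fringe sequence (N >= 3),
# the starting point of temporal phase unwrapping.
import numpy as np

def wrapped_phase(images):
    """images: (N, H, W) intensity frames with equally spaced phase shifts."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), images, axes=1)   # sum_k I_k sin(delta_k)
    den = np.tensordot(np.cos(shifts), images, axes=1)   # sum_k I_k cos(delta_k)
    return np.arctan2(-num, den)                          # wrapped to (-pi, pi]
```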

  3. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are becoming widely applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes 3D clustering and recognition of trunks, voxel growing and part-based 3D modeling. After preprocessing, a trunk center is identified as a point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively assigned to the same center as their nearest point of higher density. To eliminate noisy points, the cluster border is refined by trimming boundary outliers. Then, candidate trunks are extracted from the clustering results in three orthogonal planes by shape analysis. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters. These parameters are utilized to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaps. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
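    A minimal sketch of the trunk-center selection rule described above (a point with a local density peak and a large distance to any point of higher density), in the style of density-peak clustering; the cutoff distance and number of centers are illustrative parameters.

```python
# Sketch: density-peak style selection of trunk centres from a 2-D projection
# of the point cloud. Each point gets a local density (rho) and the distance
# to the nearest point of higher density (delta); centres score high on both.
import numpy as np
from scipy.spatial.distance import cdist

def trunk_centres(points_xy, dc=0.5, n_centres=10):
    d = cdist(points_xy, points_xy)
    rho = np.sum(d < dc, axis=1) - 1                     # local density
    delta = np.empty(len(points_xy))
    for i in range(len(points_xy)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    score = rho * delta                                  # decision value
    return np.argsort(score)[::-1][:n_centres]           # indices of centres
```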

  4. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    PubMed Central

    Jing, Zhang; Sheng, Kang Bao

    2016-01-01

    To assist physicians in quickly finding the required 3D model from the mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods. PMID:27293478
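    A minimal sketch of the DR step as described (keep only the first M low-frequency DFT coefficients of a feature vector) and of comparing descriptors in that reduced space; the feature vectors and M are placeholders.

```python
# Sketch: dimensionality reduction of a shape feature vector by keeping only
# the first M low-frequency Fourier coefficients.
import numpy as np

def reduce_feature(feature_vector, m):
    coeffs = np.fft.rfft(np.asarray(feature_vector, dtype=float))
    return coeffs[:m]                      # compact, noise-attenuated descriptor

def compare(query, candidate, m=16):
    # Distance between reduced descriptors of a sketch query and a model view.
    return np.linalg.norm(reduce_feature(query, m) - reduce_feature(candidate, m))
```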

  5. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation.

    PubMed

    Jing, Zhang; Sheng, Kang Bao

    2015-01-01

    To assist physicians in quickly finding the required 3D model from the mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.

  6. Identification and Detection of Simple 3D Objects with Severely Blurred Vision

    PubMed Central

    Kallie, Christopher S.; Legge, Gordon E.; Yu, Deyue

    2012-01-01

    Purpose. Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. Methods. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10–24 feet, or 3.05–7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2–6 feet, or 0.61–1.83 m), and color (gray and white). Results. Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). Conclusions. When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed. PMID:23111613

  7. Blind Search of Faint Moving Objects in 3D Data Sets

    DTIC Science & Technology

    2013-09-01

    Phan Dao, Peter Crabtree and Patrick McNicholl (AFRL/RVBYC) and Tamar Payne (Applied...) use a simulated object signature superimposed on a measured background and show that the limiting magnitude can be improved by up to 6 visual magnitudes. A quasi-blind search algorithm that identifies the streak of photons, assuming no prior knowledge of orbital information, will be discussed.

  8. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  9. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction.

    PubMed

    Sierra, Heidy; Brooks, Dana; DiMarzio, Charles

    2010-01-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
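    A hedged sketch of the texture-extraction step: a local entropy filter applied slice by slice to a 3-D stack, then thresholded to flag textured regions. scikit-image's rank entropy filter and the parameter values stand in for the authors' implementation.

```python
# Sketch: local-entropy texture map of a 3-D image stack, computed slice by
# slice with a rank entropy filter, then thresholded to flag textured regions.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def entropy_texture_stack(stack, radius=5, threshold=4.0):
    """stack: (Z, Y, X) array of DIC slices scaled to [0, 1]."""
    texture = np.zeros(stack.shape, dtype=float)
    for z, image in enumerate(stack):
        texture[z] = entropy(img_as_ubyte(image), disk(radius))
    return texture, texture > threshold     # entropy map and textured-region mask
```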

  10. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.

  11. 3D printing and IoT for personalized everyday objects in nursing and healthcare

    NASA Astrophysics Data System (ADS)

    Asano, Yoshihiro; Tanaka, Hiroya; Miyagawa, Shoko; Yoshioka, Junki

    2017-04-01

Today, the application of 3D printing technology for medical use is becoming popular. It helps greatly in making complicated body-part shapes with functional materials: injured, weakened or missing parts can be complemented and their original shape and function recovered. However, these cases mainly focus on the symptom itself, not on the everyday lives of patients. With life spans extending, many of us will live with chronic disease for a long time, so we should think about our living environments more carefully. For example, we can make personalized everyday objects that support both body and mind. We therefore use 3D printing to make everyday objects from a nursing/healthcare perspective. This project has two main research questions. The first is how to make objects that patients really require. We invited many kinds of people, such as engineers, nurses and patients, to our research activity: nurses identify patients' real demands first, and engineers support them with rapid prototyping. In this way we identified the best collaboration methodologies among nurses, engineers and patients. The second question is how to trace and evaluate the usage of the created objects. It is difficult to monitor a user's activity over a long period, so we are developing an IoT sensing system that monitors activities remotely. We enclose a data logger, which lasts about one month, in the 3D printed objects; after one month, we can retrieve the data from the objects and understand how they have been used.

  12. Study of objective evaluation indicators of 3D visual fatigue based on RDS related tasks

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Liu, Yue; Zou, Bochao; Wang, Yongtian; Cheng, Dewen

    2015-03-01

Three-dimensional (3D) displays have seen rapid progress in recent years because of the highly realistic sensation and sense of presence they offer human users. However, comfort issues with 3D displays are frequently reported and restrict their wide application. In order to study objective evaluation indicators associated with 3D visual fatigue, an experiment is designed in which subjects are required to accomplish a task realized with random dot stereograms (RDS). The task is designed to induce 3D visual fatigue in the subjects while excluding the impact of monocular depth cues. The visual acuity, critical flicker frequency (CFF), reaction time and correct rate of the subjects during the experiment are recorded and analyzed. The correlation of the experimental data with subjective evaluation scores is studied to find which indicator is most closely related to 3D visual fatigue. Analysis of the experimental data shows that the trends in the correct rate are in line with the results of the subjective evaluation.

  13. Systems in Development: Motor Skill Acquisition Facilitates 3D Object Completion

    PubMed Central

    Soska, Kasey C.; Adolph, Karen E.; Johnson, Scott P.

    2009-01-01

    How do infants learn to perceive the backs of objects that they see only from a limited viewpoint? Infants’ 3D object completion abilities emerge in conjunction with developing motor skills—independent sitting and visual-manual exploration. Twenty-eight 4.5- to 7.5-month-old infants were habituated to a limited-view object and tested with volumetrically complete and incomplete (hollow) versions of the same object. Parents reported infants’ sitting experience, and infants’ visual-manual exploration of objects was observed in a structured play session. Infants’ self-sitting experience and visual-manual exploratory skills predicted looking to the novel, incomplete object on the habituation task. Further analyses revealed that self-sitting facilitated infants’ visual inspection of objects while they manipulated them. The results are framed within a developmental systems approach, wherein infants’ sitting skill, multimodal object exploration, and object knowledge are linked in developmental time. PMID:20053012

  14. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment and the difficulty of robot navigation. However, painting automation is necessary because it provides consistent paint film thickness, and autonomous mobile robots are strongly required for flexible painting work. The main problem for autonomous mobile robot navigation is that many obstacles are not represented in the CAD data, so obstacle detection and recognition are necessary for effective obstacle avoidance and painting. Many object recognition algorithms have been studied to date, in particular 2D object recognition methods using intensity images. In our case, however, there is no environmental illumination, so these methods cannot be used. 3D range data must be used instead, but its drawbacks are high computational cost and long recognition times due to the huge database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, and the PCA and NN algorithms are then applied to this transformed intensity information, reducing the processing time and making the data easier to handle, which were disadvantages of previous research on 3D object recognition. A set of experimental results is shown to verify the effectiveness of the proposed algorithm.
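
    The abstract outlines a pipeline of range-to-intensity conversion followed by PCA and a neural network classifier. The sketch below is a minimal stand-in for such a pipeline using scikit-learn; the normalization bounds, the number of principal components, the network size and the simulated training data are all hypothetical, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def range_to_intensity(range_image, z_min, z_max):
        """Map measured range values into a normalized intensity image so that a
        standard intensity-based PCA + neural network pipeline can be applied."""
        clipped = np.clip(range_image, z_min, z_max)
        return (clipped - z_min) / (z_max - z_min)

    # hypothetical data: 200 simulated 64x64 range images of 5 object classes
    rng = np.random.default_rng(0)
    range_images = rng.uniform(0.5, 2.5, size=(200, 64, 64))
    labels = rng.integers(0, 5, size=200)

    X = np.array([range_to_intensity(r, 0.5, 2.5).ravel() for r in range_images])
    model = make_pipeline(PCA(n_components=20),
                          MLPClassifier(hidden_layer_sizes=(50,), max_iter=500))
    model.fit(X, labels)
    predicted = model.predict(X[:5])
    ```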

  15. Combining scale-space and similarity-based aspect graphs for fast 3D object recognition.

    PubMed

    Ulrich, Markus; Wiedemann, Christian; Steger, Carsten

    2012-10-01

    This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms.

  16. Shaping functional nano-objects by 3D confined supramolecular assembly.

    PubMed

    Deng, Renhua; Liang, Fuxin; Li, Weikun; Liu, Shanqin; Liang, Ruijing; Cai, Mingle; Yang, Zhenzhong; Zhu, Jintao

    2013-12-20

Nano-objects are generated through 3D confined supramolecular assembly, followed by sequential disintegration through rupture of the hydrogen bonding. The shape of the nano-objects is tunable, ranging from nano-discs and nano-cups to nano-toroids. The nano-objects are pH-responsive. Functional materials, for example inorganic or metal nanoparticles, are easily complexed onto the external surface, extending both the composition and the microstructure of the nano-objects. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Operational Retrieval of aerosol optical depth over Indian subcontinent and Indian Ocean using INSAT-3D/Imager product validation

    NASA Astrophysics Data System (ADS)

    Mishra, M. K.; Rastogi, G.; Chauhan, P.

    2014-11-01

Aerosol optical depth (AOD) over the Indian subcontinent and the Indian Ocean region is derived operationally for the first time from geostationary earth orbit (GEO) satellite INSAT-3D Imager data at the 0.65 μm wavelength. A single visible-channel algorithm based on clear-sky composites gives a larger retrieval error in AOD than multiple-channel algorithms due to errors in estimating surface reflectance and atmospheric properties. Since the MIR channel signal is insensitive to the presence of most aerosols, the AOD retrieval algorithm in the present study employs both visible (centred at 0.65 μm) and mid-infrared (MIR; centred at 3.9 μm) band measurements, and allows the transport of aerosols to be monitored at higher temporal resolution. Comparisons between INSAT-3D derived AOD (τI) and MODIS derived AOD (τM), co-located in space (at 1° resolution) and time during January, February and March (JFM) 2014, encompass 1165, 1052 and 900 pixels, respectively. Good agreement is found between τI and τM during JFM 2014, with linear correlation coefficients (R) of 0.87, 0.81 and 0.76, respectively. The extensive validation during JFM 2014 encompasses 215 AOD values co-located in space and time between INSAT-3D (τI) and 10 sun photometers (τA), comprising 9 AERONET (Aerosol Robotic Network) sites and 1 handheld sun-photometer site. The INSAT-3D derived AOD, τI, is found to lie within the retrieval errors of ±0.07 ± 0.15τA, with a linear correlation coefficient (R) of 0.90 and a root mean square error (RMSE) of 0.06. The present work shows that INSAT-3D aerosol products can be used quantitatively in many applications, with caution for possible residual cloud, snow/ice and water contamination.
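
    The reported validation statistics (correlation, RMSE and the fraction of retrievals inside the error envelope) can be reproduced with a few lines of NumPy. The sketch below assumes the envelope is interpreted as |τI − τA| ≤ 0.07 + 0.15τA, which is an interpretation rather than a detail confirmed by the abstract; the input arrays are hypothetical.

    ```python
    import numpy as np

    def validate_aod(tau_retrieved, tau_reference):
        """Agreement statistics between retrieved and reference AOD arrays:
        linear correlation, RMSE and the fraction of points within the
        assumed error envelope +/-(0.07 + 0.15 * tau_reference)."""
        r = np.corrcoef(tau_retrieved, tau_reference)[0, 1]
        rmse = np.sqrt(np.mean((tau_retrieved - tau_reference) ** 2))
        envelope = 0.07 + 0.15 * tau_reference
        within = np.mean(np.abs(tau_retrieved - tau_reference) <= envelope)
        return r, rmse, within
    ```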

  18. Reconstruction of 3D solid objects from 2D orthographic views

    NASA Astrophysics Data System (ADS)

    Hosomura, Tsukasa

    1995-09-01

The purpose of this paper was to design an automatic system for transforming 2D orthographic views into 3D solid objects. The input drawing contains the geometric information of lines and circles. The reconstructed objects may be boxes, cylinders and their composites. The system used AutoCAD as a drawing tool: an input 2D orthographic view was created using the package's drawing editor, and through the data interchange file (DXF) capabilities the application programs can access the AutoCAD database. The script facility was used to execute the set of drawing commands that creates a continuously running display for output. The system was implemented in seven steps. First, the 2D drawing was created and saved in ASCII code. Then a DXF file was created and parsed into drawing commands. A translational sweep operation was used to reconstruct subparts, and the relationships between subparts were used to compose the final part. Finally, the 3D solid object was displayed.
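
    The translational sweep of a 2D profile is the core reconstruction step mentioned above. The following sketch extrudes a closed polygon along the z-axis into a triangulated prism (side walls only, end caps omitted); it is a generic illustration of the operation under these assumptions, not the paper's AutoCAD/DXF implementation.

    ```python
    import numpy as np

    def extrude_polygon(profile, height):
        """Translational sweep: extrude a closed 2D polygon (counter-clockwise
        vertex list) along +z, returning vertices and triangular side faces."""
        profile = np.asarray(profile, float)
        n = len(profile)
        bottom = np.column_stack([profile, np.zeros(n)])
        top = np.column_stack([profile, np.full(n, height)])
        vertices = np.vstack([bottom, top])
        faces = []
        for i in range(n):                      # two triangles per side edge
            j = (i + 1) % n
            faces.append([i, j, n + j])
            faces.append([i, n + j, n + i])
        return vertices, np.array(faces)

    # hypothetical usage: sweep a unit square into a box of height 2
    verts, tris = extrude_polygon([(0, 0), (1, 0), (1, 1), (0, 1)], height=2.0)
    ```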

  19. An approach to detecting deliberately introduced defects and micro-defects in 3D printed objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-05-01

In prior work, Zeltmann et al. demonstrated the negative impact that defects of various sizes can have on 3D printed objects. These defects may make the object unsuitable for its application or even present a hazard if the object is used in a safety-critical application. With the uses of 3D printing proliferating and consumer access to printers increasing, the possibility of a nefarious individual or group subverting the desired printing quality and safety attributes of a printer or printed object must be considered. Several different approaches to subversion may exist: attackers may physically impair the functionality of the printer or launch a cyber-attack. Detecting introduced defects, from either attack, is critical to maintaining public trust in 3D printed objects and in the technology. This paper presents an alternative approach: it applies a quality assurance technology based on visible-light sensing to this challenge and assesses its capability for detecting introduced defects of multiple sizes.

  20. The effect of background and illumination on color identification of real, 3D objects

    PubMed Central

    Allred, Sarah R.; Olkkonen, Maria

    2013-01-01

    For the surface reflectance of an object to be a useful cue to object identity, judgments of its color should remain stable across changes in the object's environment. In 2D scenes, there is general consensus that color judgments are much more stable across illumination changes than background changes. Here we investigate whether these findings generalize to real 3D objects. Observers made color matches to cubes as we independently varied both the illumination impinging on the cube and the 3D background of the cube. As in 2D scenes, we found relatively high but imperfect stability of color judgments under an illuminant shift. In contrast to 2D scenes, we found that background had little effect on average color judgments. In addition, variability of color judgments was increased by an illuminant shift and decreased by embedding the cube within a background. Taken together, these results suggest that in real 3D scenes with ample cues to object segregation, the addition of a background may improve stability of color identification. PMID:24273521

  1. Detection and Purging of Specular Reflective and Transparent Object Influences in 3d Range Measurements

    NASA Astrophysics Data System (ADS)

    Koch, R.; May, S.; Nüchter, A.

    2017-02-01

3D laser scanners are favoured sensors for mapping in mobile service robotics, in both indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed because of their high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the presence of specular reflective and transparent objects, e.g., mirrors, windows, or shiny metals, the laser measurements become corrupted. Depending on the type of object and the incidence angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, or a measurement of a reflected object. It is important to detect such situations in order to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in the point clouds of a multi-echo laser scanner. Furthermore, it filters the point clouds from the influences of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative Closest Point (ICP) algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office containing a mirror. The paper demonstrates that, for single scans, the detection of specular reflective and transparent objects in 3D is possible.

  2. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  3. A novel iterative computation algorithm for Kinoform of 3D object

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-yu; Chuang, Pei; Wang, Xi; Zong, Yantao

    2012-11-01

A novel method for computing the kinoform of a 3D object, based on the traditional iterative Fourier transform algorithm (IFTA), is proposed in this paper. A kinoform is a special kind of computer-generated hologram (CGH) with very high diffraction efficiency, since it modulates only the phase of the illuminating light and is free of cross-interference from the conjugate image. The traditional IFTA assumes that the reconstructed image lies at infinity (in the Fraunhofer diffraction region) and ignores the depth of the 3D object, so it can only compute two-dimensional kinoforms. The algorithm proposed here divides the three-dimensional object into several object planes in depth and treats every object plane as a target image; the iterative computation is then carried out between one input plane (the kinoform) and multiple output planes (the reconstructed images). A spatial phase factor is added to the iteration to represent the depth of the 3D object, so that the reconstructed images lie in the Fresnel diffraction region. Optical reconstruction of the kinoform computed with this method is realized on a Liquid Crystal on Silicon (LCoS) Spatial Light Modulator (SLM). The Mean Square Error (MSE) and Structural Similarity (SSIM) between the original and reconstructed images are used to evaluate the method. The experimental results show that the algorithm is fast and that the resulting kinoform can reconstruct the object in different planes with high precision under plane-wave illumination. The reconstructed images convey a sense of three-dimensional depth. Finally, the influence of spacing and occlusion between different object planes on the reconstructed images is also discussed.
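
    For reference, a minimal single-plane iterative Fourier transform (Gerchberg–Saxton) loop for a phase-only kinoform is sketched below; the paper's multi-plane variant would additionally apply a per-plane spatial (Fresnel) phase factor and iterate against several target planes. The iteration count and random initial phase are assumptions.

    ```python
    import numpy as np

    def gerchberg_saxton_kinoform(target_amplitude, iterations=50):
        """Iteratively compute a phase-only hologram (kinoform) whose far-field
        (Fraunhofer) reconstruction approximates the target amplitude."""
        field = np.exp(1j * 2 * np.pi * np.random.rand(*target_amplitude.shape))
        for _ in range(iterations):
            recon = np.fft.fft2(field)                                # propagate to the image plane
            recon = target_amplitude * np.exp(1j * np.angle(recon))   # keep phase, impose target amplitude
            field = np.fft.ifft2(recon)                               # back-propagate to the hologram plane
            field = np.exp(1j * np.angle(field))                      # phase-only (kinoform) constraint
        return np.angle(field)
    ```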

  4. 220GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

We present a 220 GHz 3D imaging 'Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm³ volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  5. Cryo-EM structure of a 3D DNA-origami object

    PubMed Central

    Bai, Xiao-chen; Martin, Thomas G.; Scheres, Sjors H. W.; Dietz, Hendrik

    2012-01-01

    A key goal for nanotechnology is to design synthetic objects that may ultimately achieve functionalities known today only from natural macromolecular complexes. Molecular self-assembly with DNA has shown potential for creating user-defined 3D scaffolds, but the level of attainable positional accuracy has been unclear. Here we report the cryo-EM structure and a full pseudoatomic model of a discrete DNA object that is almost twice the size of a prokaryotic ribosome. The structure provides a variety of stable, previously undescribed DNA topologies for future use in nanotechnology and experimental evidence that discrete 3D DNA scaffolds allow the positioning of user-defined structural motifs with an accuracy that is similar to that observed in natural macromolecules. Thereby, our results indicate an attractive route to fabricate nanoscale devices that achieve complex functionalities by DNA-templated design steered by structural feedback. PMID:23169645

  6. Fourier Domain Iterative Approach to Optical Sectioning of 3d Translucent Objects for Ophthalmology Purposes

    NASA Astrophysics Data System (ADS)

    Razguli, A. V.; Iroshnikov, N. G.; Larichev, A. V.; Romanenko, T. E.; Goncharov, A. S.

    2017-05-01

In this paper we deal with the problem of optical sectioning, a post-processing step in the investigation of 3D translucent medical objects based on rapid refocusing of the imaging system with adaptive optics. Each image captured in the focal plane can be represented as the sum of the in-focus true section and out-of-focus images of neighboring sections in depth, which are undesirable in the subsequent reconstruction of the 3D object. The optical sectioning problem under consideration is to develop a robust approach capable of obtaining a stack of cross-section images free of such distortions. For a typical sectioning problem arising in ophthalmology we propose a local iterative method in the Fourier spectral plane. Compared with non-local, constant parameter selection over the whole spectral domain, the method demonstrates both improved sectioning results and good scalability when implemented on multi-core CPUs.

  7. Encountered-type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.

    PubMed

    Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohch, Nobuhiro

    2017-08-17

    This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.

  8. Encryption of digital hologram of 3-D object by virtual optics.

    PubMed

    Kim, Hyun; Kim, Do-Hyung; Lee, Yeon

    2004-10-04

    We present a simple technique to encrypt a digital hologram of a three-dimensional (3-D) object into a stationary white noise by use of virtual optics and then to decrypt it digitally. In this technique the digital hologram is encrypted by our attaching a computer-generated random phase key to it and then forcing them to Fresnel propagate to an arbitrary plane with an illuminating plane wave of a given wavelength. It is shown in experiments that the proposed system is robust to blind decryptions without knowing the correct propagation distance, wavelength, and phase key used in the encryption. Signal-to-noise ratio (SNR) and mean-square-error (MSE) of the reconstructed 3-D object are calculated for various decryption distances and wavelengths, and partial use of the correct phase key.
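
    A minimal numerical sketch of the encrypt/decrypt idea described above is given below, using angular-spectrum propagation as a stand-in for the paper's Fresnel propagation; the wavelength, pixel pitch, propagation distance and phase key are placeholder parameters, not values from the paper.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, distance):
        """Propagate a complex field over `distance` with the angular spectrum method."""
        n, m = field.shape
        FX, FY = np.meshgrid(np.fft.fftfreq(m, d=dx), np.fft.fftfreq(n, d=dx))
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.where(arg > 0, np.exp(1j * kz * distance), 0.0)   # drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * H)

    def encrypt(hologram, key_phase, wavelength, dx, distance):
        """Attach a random phase key and propagate to the encryption plane."""
        return angular_spectrum_propagate(hologram * np.exp(1j * key_phase),
                                          wavelength, dx, distance)

    def decrypt(cipher, key_phase, wavelength, dx, distance):
        """Back-propagate and remove the key (needs the correct distance, wavelength, key)."""
        return angular_spectrum_propagate(cipher, wavelength, dx, -distance) * np.exp(-1j * key_phase)
    ```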

  9. Encryption of digital hologram of 3-D object by virtual optics

    NASA Astrophysics Data System (ADS)

    Kim, Hyun; Kim, Do-Hyung; Lee, Yeon H.

    2004-10-01

    We present a simple technique to encrypt a digital hologram of a three-dimensional (3-D) object into a stationary white noise by use of virtual optics and then to decrypt it digitally. In this technique the digital hologram is encrypted by our attaching a computer-generated random phase key to it and then forcing them to Fresnel propagate to an arbitrary plane with an illuminating plane wave of a given wavelength. It is shown in experiments that the proposed system is robust to blind decryptions without knowing the correct propagation distance, wavelength, and phase key used in the encryption. Signal-to-noise ratio (SNR) and mean-square-error (MSE) of the reconstructed 3-D object are calculated for various decryption distances and wavelengths, and partial use of the correct phase key.

  10. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  11. Artificial neural networks and model-based recognition of 3-D objects from 2-D images

    NASA Astrophysics Data System (ADS)

    Chao, Chih-Ho; Dhawan, Atam P.

    1992-09-01

A computer vision system is developed for 3-D object recognition using artificial neural networks and a knowledge-based top-down feedback analysis system. This computer vision system can adequately analyze an incomplete edge map provided by a low-level processor for 3-D representation and recognition using key features. The key features are selected using a priority assignment and then used in an artificial neural network for matching with model key features. The result of such matching is utilized in generating the model-driven top-down feedback analysis. From the incomplete edge map we try to pick a candidate pattern utilizing the key feature priority assignment. The highest priority is given to the most connected node and its associated features. The features are space-invariant structures and sets of orientations of edge primitives, which are then mapped to real numbers. A Hopfield network is applied with two levels of matching to reduce the search time: the first match chooses the class of candidate models, and the second then finds the model closest to the data patterns. This model is then rotated in 3-D to find the best match with the incomplete edge patterns and to provide the additional features in 3-D. In the case of multiple objects, a dynamically interconnected search strategy is designed to recognize objects using one pattern at a time. This strategy is also useful in recognizing occluded objects. The experimental results presented show the capability and effectiveness of this system.
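
    As background for the matching stage, the sketch below shows a classical Hopfield associative memory with Hebbian storage and asynchronous updates; it illustrates how an incomplete bipolar feature pattern can be completed toward a stored model pattern, but it is only a generic illustration, not the paper's two-level matching scheme.

    ```python
    import numpy as np

    def train_hopfield(patterns):
        """Store bipolar (+1/-1) patterns of shape (n_patterns, n) with the Hebbian rule."""
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, probe, steps=200, seed=0):
        """Asynchronously update a (possibly incomplete) bipolar probe until it settles."""
        rng = np.random.default_rng(seed)
        state = probe.copy()
        for _ in range(steps):
            i = rng.integers(len(state))
            state[i] = 1 if W[i] @ state >= 0 else -1
        return state
    ```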

  12. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  13. 3D Modeling of Interior Building Environments and Objects from Noisy Sensor Suites

    DTIC Science & Technology

    2015-05-14

Eric Turner, Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720. Doctoral dissertation (Ph.D. in Engineering – Electrical Engineering and Computer Sciences, Graduate Division, University of California, Berkeley).

  14. The representation of moving 3-D objects in apparent motion perception.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2009-08-01

    In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. By conducting three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave, hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred convex hemispheres with low-pass filtering than when they shifted to concave hemispheres. These results suggest that the internal object's representation in apparent motion contains incomplete depth information intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information with low-spatial-frequency components.

  15. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
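
    The first stage of the method, indexing local descriptors with Locality Sensitive Hashing, can be illustrated with a random-hyperplane (cosine) LSH table as sketched below; the number of hash bits and the exact-distance verification step are assumptions, and the paper's specific LSH variant and joint 3D-signature model selection are not reproduced.

    ```python
    import numpy as np
    from collections import defaultdict

    class HyperplaneLSH:
        """Index descriptors with random-hyperplane LSH for approximate NN lookup."""

        def __init__(self, dim, n_bits=16, seed=0):
            rng = np.random.default_rng(seed)
            self.planes = rng.normal(size=(n_bits, dim))
            self.buckets = defaultdict(list)

        def _hash(self, x):
            return tuple((self.planes @ x > 0).astype(int))

        def add(self, descriptor, model_id):
            self.buckets[self._hash(descriptor)].append((descriptor, model_id))

        def query(self, descriptor):
            # candidates sharing the same hash key, verified by exact distance afterwards
            cands = self.buckets.get(self._hash(descriptor), [])
            return sorted(cands, key=lambda c: np.linalg.norm(c[0] - descriptor))
    ```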

  16. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for the detection of breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging systems using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values. PMID:24371468
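
    The algebraic reconstruction technique (ART) mentioned above is, at its core, a Kaczmarz-style row-action update. The sketch below shows that update for a generic linear system in which each row of A is one ray sum and b holds the measured projections; the relaxation factor, iteration count and system matrix are placeholders, and the TV-regularized variant is not included.

    ```python
    import numpy as np

    def art_reconstruct(A, b, n_iters=10, relax=0.5):
        """ART / Kaczmarz: sequentially project the current estimate onto the
        hyperplane defined by each projection equation A[i] @ x = b[i]."""
        x = np.zeros(A.shape[1])
        row_norms = np.sum(A * A, axis=1)
        for _ in range(n_iters):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                residual = b[i] - A[i] @ x
                x += relax * residual / row_norms[i] * A[i]
        return x
    ```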

  17. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for the detection of breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT) imaging systems using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.

  18. Statistical and neural network classifiers in model-based 3-D object recognition

    NASA Astrophysics Data System (ADS)

    Newton, Scott C.; Nutter, Brian S.; Mitra, Sunanda

    1991-02-01

For autonomous machines equipped with vision capabilities and operating in a controlled environment, 3-D model-based object identification methodologies will in general solve rigid-body recognition problems. In an uncontrolled environment, however, several factors pose difficulties for correct identification. We have addressed the problem of 3-D object recognition using a number of methods, including neural network classifiers and a Bayesian-like classifier for matching image data with model projection-derived data [1, 2]. The neural network classifiers began operation as simple feature-vector classifiers; however, unmodelled signal behavior was learned from additional samples, yielding a great improvement in classification rates. The model analysis drastically shortened the training time of both classification systems. In an environment where signal behavior is not accurately modelled, two separate forms of learning give the systems the ability to update their estimates of this behavior, provided sufficient samples are available to learn this new information. Given sufficient information and a well-controlled environment, identification of 3-D objects from a limited number of classes is indeed possible.

  19. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  20. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  1. Blind robust watermarking schemes for copyright protection of 3D mesh objects.

    PubMed

    Zafeiriou, Stefanos; Tefas, Anastasios; Pitas, Ioannis

    2005-01-01

    In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, theta, phi) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles theta is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the theta domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed method to deal with the aforementioned attacks.
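
    A minimal sketch of the embedding step described above is given below: the mesh is centred, aligned via an SVD of its vertices, converted to spherical coordinates, and the watermark is cast as small ± deformations of the radial coordinate r only. The random carrier-vertex selection and the embedding strength are assumptions; the paper selects carrier vertices by specific theta angles or theta ranges, which is not reproduced here.

    ```python
    import numpy as np

    def embed_watermark(vertices, watermark_bits, strength=1e-3, seed=0):
        """Center the mesh, align its principal axes (largest variance to z),
        convert vertices to spherical coordinates and perturb only r."""
        center = vertices.mean(axis=0)
        v = vertices - center
        _, _, Vt = np.linalg.svd(v, full_matrices=False)
        R = Vt[::-1]                         # largest-variance axis mapped to z
        v = v @ R.T
        r = np.linalg.norm(v, axis=1)
        theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1, 1))
        phi = np.arctan2(v[:, 1], v[:, 0])
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(v), size=len(watermark_bits), replace=False)  # carrier vertices
        r[idx] += strength * (2 * np.asarray(watermark_bits) - 1)          # +/- deformation along r
        out = np.stack([r * np.sin(theta) * np.cos(phi),
                        r * np.sin(theta) * np.sin(phi),
                        r * np.cos(theta)], axis=1)
        return out @ R + center              # undo alignment and centering
    ```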

  2. Combining depth and gray images for fast 3D object recognition

    NASA Astrophysics Data System (ADS)

    Pan, Wang; Zhu, Feng; Hao, Yingming

    2016-10-01

Reliable and stable visual perception systems are needed for humanoid robotic assistants to perform complex grasping and manipulation tasks. Recognition of the object and of its precise 6D pose is required. This paper addresses the challenge of detecting and positioning a textureless known object by estimating its complete 6D pose in cluttered scenes. A 3D perception system is proposed which can robustly recognize CAD models in cluttered scenes for the purpose of grasping with a mobile manipulator. Our approach uses a powerful combination of two different camera technologies, Time-of-Flight (TOF) and RGB, to segment the scene and extract objects: the depth image and the gray image are combined to recognize instances of a 3D object in the world and estimate their 3D poses. The full pose estimation process is based on depth-image segmentation and an efficient shape-based matching. First, the depth image is used to separate the supporting plane of the objects from the cluttered background, so that cluttered backgrounds are circumvented and the search space is greatly reduced. A hierarchical model based on the geometry of an a priori CAD model of the object is generated in an offline stage. Using the hierarchical model, we then perform shape-based matching in the 2D gray images. Finally, we validate the proposed method in a number of experiments. The results show that using depth and gray images together can meet the demands of a time-critical application and significantly reduce the object recognition error rate.
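
    The supporting-plane removal mentioned above is commonly done with a RANSAC plane fit on the depth-derived point cloud; a generic sketch is shown below, with the iteration count and inlier threshold as placeholder parameters rather than values from the paper.

    ```python
    import numpy as np

    def ransac_plane(points, n_iters=200, threshold=0.01, seed=0):
        """Fit the dominant plane (e.g. the supporting table) with RANSAC and
        return a boolean mask of inlier points so they can be removed."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:                     # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal @ sample[0]
            inliers = np.abs(points @ normal + d) < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers
    ```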

  3. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Astrophysics Data System (ADS)

    Nandhakumar, N.; Smith, Philip W.

    1993-12-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  4. Applying Mean-Shift - Clustering for 3D object detection in remote sensing data

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Diederich, Malte; Troemel, Silke

    2013-04-01

The timely warning and forecasting of high-impact weather events is crucial for life, safety and the economy. The development and improvement of methods for the detection and nowcasting / short-term forecasting of these events is therefore an ongoing research question. A new 3D object detection and tracking algorithm is presented. Within the project "object-based analysis and seamless prediction (OASE)" we address a better understanding and forecasting of convective events based on the synergetic use of remotely sensed data and new methods for detection, nowcasting, validation and assimilation. In order to gain advanced insight into the lifecycle of convective cells, we perform object detection on a new high-resolution 3D radar- and satellite-based composite and plan to track the detected objects over time, providing a model of the lifecycle. The insights into the lifecycle will be used to improve the prediction of convective events on the nowcasting time scale, as well as to provide a new type of data to be assimilated into numerical weather models, thus seamlessly bridging the gap between nowcasting and NWP. The object identification (or clustering) is performed using a technique borrowed from computer vision, called mean-shift clustering. Mean-shift clustering works without many of the parameterizations or rigid threshold schemes employed by existing schemes (e.g. KONRAD, TITAN, Trace-3D), which limit tracking to fully matured convective cells of significant size and/or strength. Mean-shift performs without such limiting definitions, providing a wider scope for studying larger classes of phenomena and a vehicle for research into the object definition itself. Since the mean-shift clustering technique can be applied to many types of remote-sensing and model data for object detection, it is of general interest to the remote sensing and modeling community. The focus of the presentation is the introduction of this technique and the results of its application.
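
    For reference, a bare-bones flat-kernel mean-shift clustering routine is sketched below; the bandwidth is the only tunable parameter, in line with the claim that the method avoids rigid threshold schemes. This is a generic illustration, not the project's implementation on radar/satellite composites.

    ```python
    import numpy as np

    def mean_shift(points, bandwidth=1.0, n_iters=50, tol=1e-5):
        """Flat-kernel mean shift: move every point toward the mean of its
        neighbours within `bandwidth`; points converging to nearby modes
        receive the same cluster label."""
        modes = points.astype(float).copy()
        for _ in range(n_iters):
            shifted = np.empty_like(modes)
            for i, m in enumerate(modes):
                nbrs = points[np.linalg.norm(points - m, axis=1) < bandwidth]
                shifted[i] = nbrs.mean(axis=0) if len(nbrs) else m
            moved = np.max(np.linalg.norm(shifted - modes, axis=1))
            modes = shifted
            if moved < tol:
                break
        # merge modes closer than half the bandwidth into cluster labels
        labels = -np.ones(len(points), dtype=int)
        centers = []
        for i, m in enumerate(modes):
            for c, center in enumerate(centers):
                if np.linalg.norm(m - center) < bandwidth / 2:
                    labels[i] = c
                    break
            else:
                centers.append(m)
                labels[i] = len(centers) - 1
        return np.array(centers), labels
    ```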

  5. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  6. The 3D Radiative Effects of Clouds in Aerosol Retrieval: Can we Remove Them?

    SciTech Connect

    Kassianov, Evgueni I.; Ovchinnikov, Mikhail; Berg, Larry K.; McFarlane, Sally A.; Flynn, Connor J.; Ferrare, Richard; Hostetler, Chris A.

    2009-09-30

    We outline a new method, called the ratio method, developed to retrieve aerosol optical depth (AOD) under broken cloud conditions and present validation results from sensitivity and case studies. Results of the sensitivity study demonstrate that the ratio method, which exploits ratios of reflectances in the visible spectral range, has the potential for accurate AOD retrievals under different observational conditions and random errors in input data. Also, we examine the performance of the ratio method using aircraft data collected during the Cloud and Land Surface Interaction Campaign (CLASIC) and the Cumulus Humilis Aerosol Processing Study (CHAPS). Results of the case study suggest that the ratio method has the ability to retrieve AOD from multi-spectral aircraft observations of the reflected solar radiation.

  7. 3D HUMAN MOTION RETRIEVAL BASED ON HUMAN HIERARCHICAL INDEX STRUCTURE

    PubMed Central

    Guo, X.

    2013-01-01

With the development and wide application of motion capture technology, captured motion data sets are becoming larger and larger. For this reason, an efficient retrieval method for motion databases is very important. Such a retrieval method needs an appropriate indexing scheme and an effective similarity measure that can organize the existing motion data well. In this paper, we present a hierarchical index structure for human motion and adopt a nonlinear method to segment motion sequences. Based on this, we extract motion patterns and then employ a fast algorithm for motion-pattern similarity computation to retrieve motion sequences efficiently. The experimental results show that the proposed approach is effective and efficient. PMID:24744481

  8. 3D object optonumerical acquisition methods for CAD/CAM and computer graphics systems

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata; Pawlowski, Michal E.; Woznicki, Jerzy M.

    1999-08-01

The creation of a virtual object for CAD/CAM and computer graphics on the basis of data gathered by full-field optical measurement of a 3D object is presented. The experimental coordinates are obtained either by a combined fringe projection/photogrammetry-based system or by a fringe projection/virtual markers setup. A new, fully automatic procedure which processes the cloud of measured points into a triangular mesh accepted by CAD/CAM and computer graphics systems is presented. Its applicability to various classes of objects is tested, including an error analysis of the generated virtual objects. The usefulness of the method is demonstrated by applying the virtual object in a rapid prototyping system and in a computer graphics environment.
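
    For 2.5D point clouds such as those produced by single-view fringe projection, a triangular mesh can be obtained by Delaunay triangulation of the (x, y) coordinates, as sketched below; this is a generic illustration under that 2.5D assumption, not the paper's automatic procedure, and the synthetic cloud is hypothetical.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def triangulate_cloud(points):
        """Build a triangular mesh from a 2.5D point cloud (one z per (x, y))."""
        tri = Delaunay(points[:, :2])      # triangulate in the x-y plane
        return points, tri.simplices       # vertices and triangle index list

    # hypothetical usage on a small synthetic cloud
    xy = np.random.rand(200, 2)
    z = 0.1 * np.sin(np.pi * xy[:, 0])
    vertices, faces = triangulate_cloud(np.column_stack([xy, z]))
    ```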

  9. Searching surface orientation of microscopic objects for accurate 3D shape recovery.

    PubMed

    Shim, Seong-O; Mahmood, Muhammad Tariq; Choi, Tae-Sun

    2012-05-01

    In this article, we propose a new shape from focus (SFF) method to estimate the 3D shape of microscopic objects using the surface orientation cue of each object patch. Most SFF algorithms compute the focus value of a pixel from the information of neighboring pixels lying in the same image frame, based on the assumption that the small object patch corresponding to the neighborhood of a pixel is a plane parallel to the focal plane. However, this assumption fails for optics with a limited depth of field, where the neighboring pixels of an image have different degrees of focus. To overcome this problem, we search for the surface orientation of the small object patch corresponding to each pixel in the image sequence. The surface orientation is searched indirectly through principal component analysis. Then, the focus value of each pixel is computed from the neighboring pixels lying on the surface perpendicular to the corresponding surface orientation. Experimental results on synthetic and real microscopic objects show that the proposed method produces more accurate 3D shape in comparison to existing techniques.
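
    For orientation, a minimal shape-from-focus baseline (without the paper's orientation search) can be sketched as follows; the focus measure, window size and synthetic focal stack are our own choices.

      # Sketch: per-pixel depth index from a focal stack using a
      # sum-modified-Laplacian focus measure and an argmax over frames.
      import numpy as np
      from scipy.ndimage import convolve, uniform_filter


      def depth_from_focus(stack, window=9):
          kx = np.array([[0.0, 0.0, 0.0], [-1.0, 2.0, -1.0], [0.0, 0.0, 0.0]])
          ky = kx.T
          focus = np.empty(stack.shape, dtype=float)
          for i, img in enumerate(stack):
              ml = np.abs(convolve(img, kx)) + np.abs(convolve(img, ky))
              focus[i] = uniform_filter(ml, size=window)   # aggregate over a window
          return np.argmax(focus, axis=0)                   # frame of best focus per pixel


      # Synthetic example: 11 noisy frames, frame 5 made artificially sharp.
      stack = np.random.default_rng(1).normal(size=(11, 64, 64)) * 0.1
      stack[5] += np.indices((64, 64)).sum(axis=0) % 2       # checkerboard texture
      print(depth_from_focus(stack).mean())                  # expected to be close to 5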

  10. Enhanced Visual-Attention Model for Perceptually Improved 3D Object Modeling in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Chagnon-Forget, Maude; Rouhafzay, Ghazal; Cretu, Ana-Maria; Bouchard, Stéphane

    2016-12-01

    Three-dimensional object modeling and interactive virtual environment applications require accurate, but compact object models that ensure real-time rendering capabilities. In this context, the paper proposes a 3D modeling framework employing visual attention characteristics in order to obtain compact models that are more adapted to human visual capabilities. An enhanced computational visual attention model with additional saliency channels, such as curvature, symmetry, contrast and entropy, is initially employed to detect points of interest over the surface of a 3D object. The impact of the use of these supplementary channels is experimentally evaluated. The regions identified as salient by the visual attention model are preserved in a selectively-simplified model obtained using an adapted version of the QSlim algorithm. The resulting model is characterized by a higher density of points in the salient regions, therefore ensuring a higher perceived quality, while at the same time ensuring a less complex and more compact representation for the object. The quality of the resulting models is compared with the performance of other interest point detectors incorporated in a similar manner in the simplification algorithm. The proposed solution results overall in higher quality models, especially at lower resolutions. As an example of application, the selectively-densified models are included in a continuous multiple level of detail (LOD) modeling framework, in which an original neural-network solution selects the appropriate size and resolution of an object.

  11. Localization of significant 3D objects in 2D images for generic vision tasks

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Bergevin, Robert

    1995-10-01

    Computer vision experiments are not very often linked to practical applications but rather deal with typical laboratory experiments under controlled conditions. For instance, most object recognition experiments are based on specific models used under limitative constraints. Our work proposes a general framework for rapidly locating significant 3D objects in 2D static images of medium to high complexity, as a prerequisite step to recognition and interpretation when no a priori knowledge of the contents of the scene is assumed. In this paper, a definition of generic objects is proposed, covering the structures that are implied in the image. Under this framework, it must be possible to locate generic objects and assign a significance figure to each one from any image fed to the system. The most significant structure in a given image becomes the focus of interest of the system determining subsequent tasks (like subsequent robot moves, image acquisitions and processing). A survey of existing strategies for locating 3D objects in 2D images is first presented and our approach is defined relative to these strategies. Perceptual grouping paradigms leading to the structural organization of the components of an image are at the core of our approach.

  12. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
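
    As a hedged illustration of the kind of best-fitting step involved (not the authors' implementation), a basic RANSAC plane fit over a point cloud looks like this; thresholds and the synthetic data are invented.

      # Sketch: RANSAC fit of a single plane to a 3-D point cloud.
      import numpy as np


      def ransac_plane(points, n_iter=500, threshold=0.005, seed=None):
          rng = np.random.default_rng(seed)
          best_inliers = np.array([], dtype=int)
          best_model = None
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), size=3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(normal)
              if norm < 1e-12:
                  continue                              # degenerate (collinear) sample
              normal /= norm
              d = -normal @ sample[0]
              dist = np.abs(points @ normal + d)
              inliers = np.flatnonzero(dist < threshold)
              if len(inliers) > len(best_inliers):
                  best_inliers, best_model = inliers, (normal, d)
          return best_model, best_inliers


      # Example: noisy points on the plane z = 0 plus random outliers.
      rng = np.random.default_rng(2)
      plane = np.column_stack([rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.002, 500)])
      outliers = rng.uniform(-1, 1, (100, 3))
      model, inliers = ransac_plane(np.vstack([plane, outliers]))
      print("normal:", model[0], "inliers:", len(inliers))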

  13. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the acquired spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for the high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation of the algorithms shows high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.
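
    As a hedged sketch of the underlying photogrammetric step (not the "Mosca" code; the camera parameters below are invented), the 3D position of a tracked target follows from linear two-view triangulation given the calibrated projection matrices.

      # Sketch: DLT triangulation of one target from two calibrated cameras.
      import numpy as np


      def triangulate(P1, P2, x1, x2):
          """Return the 3-D point minimizing the algebraic DLT error."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]


      # Example: two cameras along the x axis with a 0.2 m baseline.
      K = np.diag([800.0, 800.0, 1.0])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
      X_true = np.array([0.05, -0.03, 2.0, 1.0])
      x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
      x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
      print(triangulate(P1, P2, x1, x2))                 # ~ [0.05, -0.03, 2.0]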

  14. 3D Object Recognition of a Robotic Navigation Aid for the Visually Impaired.

    PubMed

    Ye, Cang; Qian, Xiangfei

    2017-09-01

    This paper presents a 3D object recognition method and its implementation on a Robotic Navigation Aid (RNA) to allow real-time detection of indoor structural objects for the navigation of a blind person. The method segments a point cloud into numerous planar patches and extracts their Inter-Plane Relationships (IPRs). Based on the existing IPRs of the object models, the method defines 6 High Level Features (HLFs) and determines the HLFs for each patch. A Gaussian-Mixture-Model-based plane classifier is then devised to classify each planar patch into one belonging to a particular object model. Finally, a recursive plane clustering procedure is used to cluster the classified planes into the model objects. As the proposed method uses geometric context to detect an object, it is robust to the object's visual appearance change. As a result, it is ideal for detecting structural objects (e.g., stairways, doorways, etc.). In addition, it has high scalability and parallelism. The method is also capable of detecting some indoor non-structural objects. Experimental results demonstrate that the proposed method has a high success rate in object recognition.
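
    A hedged sketch of the Gaussian-mixture classification idea (the feature names, class labels and training data below are invented, and scikit-learn is used only for illustration): one mixture model is fitted per object class on plane-feature vectors, and a new planar patch is assigned to the class with the highest log-likelihood.

      # Sketch: per-class Gaussian mixture models over planar-patch features.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      # Toy features per patch, e.g. (area in m^2, tilt angle in rad, height in m).
      train = {
          "stair_tread": rng.normal([0.3, 0.0, 0.2], 0.05, size=(200, 3)),
          "door_panel":  rng.normal([1.6, 1.57, 1.0], 0.05, size=(200, 3)),
      }
      models = {name: GaussianMixture(n_components=2, random_state=0).fit(feats)
                for name, feats in train.items()}


      def classify_patch(feature_vector):
          scores = {name: gmm.score_samples(feature_vector[None, :])[0]
                    for name, gmm in models.items()}
          return max(scores, key=scores.get)


      print(classify_patch(np.array([0.31, 0.02, 0.21])))   # -> "stair_tread"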

  15. Non-destructive 3D shape measurement of transparent and black objects with thermal fringes

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2016-05-01

    Fringe projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the emitted heat radiation from surfaces which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need of any additional paintings. We will demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.

  16. Method for registering overlapping range images of arbitrarily shaped surfaces for 3D object reconstruction

    NASA Astrophysics Data System (ADS)

    Bittar, Eric; Lavallee, Stephane; Szeliski, Richard

    1993-08-01

    This paper presents a method to register overlapping 3-D surfaces which we use to reconstruct entire three-dimensional objects from sets of views. We use a range imaging sensor to digitize the object in several positions. Each pair of overlapping images is then registered using the algorithm developed in this paper. Rather than extracting and matching features, we match the complete surface, which we represent using a collection of points. This enables us to reconstruct smooth free-form objects which may lack sufficient features. Our algorithm is an extension of an algorithm we previously developed to register 3-D surfaces. This algorithm first creates an octree-spline from one set of points to quickly compute point to surface distances. It then uses an iterative nonlinear least squares minimization technique to minimize the sum of squared distances from the data point set to the octree point set. In this paper, we replace the squared distance with a function of the distance, which allows the elimination of points that are not in the shared region between the two sets. Once the object has been reconstructed by merging all the views, a continuous surface model is created from the set of points. This method has been successfully used on the limbs of a dummy and on a human head.
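
    A minimal sketch of the robust registration idea (nearest-neighbor correspondences stand in for the octree-spline distance field, and the cutoff weighting stands in for the paper's distance function): points whose closest-point distance exceeds a cutoff get zero weight, so data outside the shared region do not bias the alignment.

      # Sketch: iterative weighted rigid alignment of two overlapping point sets.
      import numpy as np
      from scipy.spatial import cKDTree


      def robust_align(source, target, n_iter=30, cutoff=0.05):
          src = source.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          tree = cKDTree(target)
          for _ in range(n_iter):
              dist, idx = tree.query(src)
              w = (dist < cutoff).astype(float)            # robust 0/1 weights
              if w.sum() < 3:
                  break
              mu_s = (w[:, None] * src).sum(0) / w.sum()
              mu_t = (w[:, None] * target[idx]).sum(0) / w.sum()
              H = ((src - mu_s) * w[:, None]).T @ (target[idx] - mu_t)
              U, _, Vt = np.linalg.svd(H)
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:                     # avoid reflections
                  Vt[-1] *= -1
                  R = Vt.T @ U.T
              t = mu_t - R @ mu_s
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total


      # Example: recover a small known rotation applied to an overlapping cloud.
      rng = np.random.default_rng(4)
      target = rng.uniform(-1, 1, (500, 3))
      theta = 0.1
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0, 0.0, 1.0]])
      source = target @ Rz.T                               # rotated copy
      R_est, t_est = robust_align(source, target, cutoff=0.2)
      print(np.allclose(R_est @ Rz, np.eye(3), atol=1e-2))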

  17. Twin-beam real-time position estimation of micro-objects in 3D

    NASA Astrophysics Data System (ADS)

    Gurtner, Martin; Zemánek, Jiří

    2016-12-01

    Various optical methods for measuring positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. The performance of the proposed method is tested during a micro-manipulation task in which the estimated position served as feedback for the controller. The experiments show that the estimate is accurate to within ∼3 μm in the lateral position and ∼7 μm in the axial distance with the refresh rate of 10 Hz. Although the experiments are done using spherical objects, the presented method could be modified to handle non-spherical objects as well.

  18. Learning Category-Specific Deformable 3D Models for Object Reconstruction.

    PubMed

    Tulsiani, Shubham; Kar, Abhishek; Carreira, Joao; Malik, Jitendra

    2017-04-01

    We address the problem of fully automatic object localization and reconstruction from a single image. This is both a very challenging and very important problem which has, until recently, received limited attention due to difficulties in segmenting objects and predicting their poses. Here we leverage recent advances in learning convolutional networks for object detection and segmentation and introduce a complementary network for the task of camera viewpoint prediction. These predictors are very powerful, but still not perfect given the stringent requirements of shape reconstruction. Our main contribution is a new class of deformable 3D models that can be robustly fitted to images based on noisy pose and silhouette estimates computed upstream and that can be learned directly from 2D annotations available in object detection datasets. Our models capture top-down information about the main global modes of shape variation within a class providing a "low-frequency" shape. In order to capture fine instance-specific shape details, we fuse it with a high-frequency component recovered from shading cues. A comprehensive quantitative analysis and ablation study on the PASCAL 3D+ dataset validates the approach as we show fully automatic reconstructions on PASCAL VOC as well as large improvements on the task of viewpoint prediction.

  19. A Retrieval of Tropical Latent Heating Using the 3D Structure of Precipitation Features

    SciTech Connect

    Ahmed, Fiaz; Schumacher, Courtney; Feng, Zhe; Hagos, Samson

    2016-09-01

    Traditionally, radar-based latent heating retrievals use rainfall to estimate the total column-integrated latent heating and then distribute that heating in the vertical using a model-based look-up table (LUT). In this study, we develop a new method that uses size characteristics of radar-observed precipitating echo (i.e., area and mean echo-top height) to estimate the vertical structure of latent heating. This technique (named the Convective-Stratiform Area [CSA] algorithm) builds on the fact that the shape and magnitude of latent heating profiles are dependent on the organization of convective systems and aims to avoid some of the pitfalls involved in retrieving accurate rainfall amounts and microphysical information from radars and models. The CSA LUTs are based on a high-resolution Weather Research and Forecasting model (WRF) simulation whose domain spans much of the near-equatorial Indian Ocean. When applied to S-PolKa radar observations collected during the DYNAMO/CINDY2011/AMIE field campaign, the CSA retrieval compares well to heating profiles from a sounding-based budget analysis and improves upon a simple rain-based latent heating retrieval. The CSA LUTs also highlight the fact that convective latent heating increases in magnitude and height as cluster area and echo-top heights grow, with a notable congestus signature of cooling at mid levels. Stratiform latent heating is less dependent on echo-top height, but is strongly linked to area. Unrealistic latent heating profiles in the stratiform LUT, viz., a low-level heating spike, an elevated melting layer, and net column cooling were identified and corrected for. These issues highlight the need for improvement in model parameterizations, particularly in linking microphysical phase changes to larger mesoscale processes.
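
    Stripped of the meteorology, the retrieval reduces to a table lookup; the sketch below shows the indexing only, with bin edges and profile values that are purely invented placeholders rather than the CSA LUT.

      # Sketch: look up a latent-heating profile from feature area and echo-top height.
      import numpy as np

      area_edges = np.array([0.0, 100.0, 1000.0, 10000.0])     # km^2 (hypothetical bins)
      top_edges = np.array([0.0, 5.0, 10.0, 18.0])              # km   (hypothetical bins)
      levels = np.linspace(0.5, 15.0, 30)                       # output height grid (km)
      # lut[i, j, :] = heating profile (K/day) for area bin i and echo-top bin j.
      lut = np.random.default_rng(5).uniform(-1.0, 6.0, (3, 3, levels.size))


      def heating_profile(area_km2, echo_top_km):
          i = np.clip(np.searchsorted(area_edges, area_km2) - 1, 0, 2)
          j = np.clip(np.searchsorted(top_edges, echo_top_km) - 1, 0, 2)
          return lut[i, j]


      print(heating_profile(area_km2=450.0, echo_top_km=8.2).shape)   # (30,)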

  20. Multiresolution analysis of 3D multimodal objects using a 2D quincunx wavelet analysis

    NASA Astrophysics Data System (ADS)

    Toubin, Marc F.; Dumont, Christophe; Truchetet, Frederic; Abidi, Mongi A.

    1999-08-01

    A reconstructed scene in virtual reality typically consists of millions of triangles. Data are heterogeneous and consist not only of geometric coordinates but also of multi-modal data. The latter requires more complex calculations and very high-speed graphics. Due to the large amount of data, displaying and analyzing these 3D models require new methods. This paper presents an innovative method to analyze multi-modal models using a 2D quincunx wavelet analysis. The algorithm is composed of three processes. First, a set of range images is captured from various viewpoints surrounding the object of interest. In addition, a set of multi-modal images is acquired. Then, a multi-resolution analysis based on the quincunx wavelet transform is performed. The multi-resolution analysis allows extraction of multi-resolution detail areas. These areas of details are projected back onto the surface of the initial model. Detail areas are marked onto the model and constitute another modality. Finally, a mesh simplification is performed to reduce data that are not marked as detail. This approach can be applied to any 3D models containing multi-modal information in order to allow fast rendering and manipulation. This method also allows 3D data de-noising.

  1. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain is directed at sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than constituting the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information-based systems, though, can provide a whole new setup for data fusion within an urban environment and deliver time-critical information while preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data stream varies from urban domain to urban domain and from system to system, which is why it is almost impossible to design a complete system taking care of all thinkable instances, now and in the future, within one constrained software design. On several occasions we have been advocating for a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, where the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  2. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. In practice, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
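
    One ingredient of such a selection, screening out motion-blurred frames, can be sketched as follows (the variance-of-Laplacian sharpness measure, the file name and the threshold are our own illustrative choices, not the paper's procedure).

      # Sketch: keep only reasonably sharp frames from a video for image-based modelling.
      import cv2


      def select_sharp_frames(video_path, step=15, min_sharpness=100.0):
          cap = cv2.VideoCapture(video_path)
          kept, index = [], 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              if index % step == 0:                        # thin the short-baseline frames
                  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                  sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
                  if sharpness >= min_sharpness:           # reject motion-blurred frames
                      kept.append((index, frame))
              index += 1
          cap.release()
          return kept


      frames = select_sharp_frames("monument_1080p.mp4")   # placeholder file name
      print(len(frames), "frames kept for image-based modelling")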

  3. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through some 3D scanning methods. Laser scanning and photogrammetry are the two main methods used. For laser scanning, a video camera and a laser source are necessary, and for photogrammetry, a high-resolution digital still camera is indispensable. In some 3D modeling tasks, the two methods are often integrated to get satisfactory results. Although much research has been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scan system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Many consumer digital cameras, such as the Canon EOS 5D Mark II, offer still photo recording at more than 10 megapixels and full 1080p HD movie recording, so an integrated scan system can be designed using such a camera. A square plate covered with coded marks is used to place the 3D objects, and two straight wooden rulers, also covered with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate make up a world coordinate system and can be used as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and the rough volume representation from the silhouettes can be obtained in this module. In the laser scan module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to determine the position of the laser. The laser scan results in a dense point cloud which can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusion of the feature points, rough volume and the dense point cloud. The design

  4. VIEWNET: a neural architecture for learning to recognize 3D objects from multiple 2D views

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen; Bradski, Gary

    1994-10-01

    A self-organizing neural network is developed for recognition of 3-D objects from sequences of their 2-D views. Called VIEWNET because it uses view information encoded with networks, the model processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the Fuzzy ARTMAP algorithm which learns 2-D view categories. Evidence from sequences of 2-D view categories is stored in a working memory. Voting based on the unordered set of stored categories determines object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view category and of up to 98.5% correct with three 2-D view categories.
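
    The centroid-based log-polar re-mapping can be sketched with plain array resampling (our own numpy version, not the VIEWNET code); scaling and rotation of the input figure then become translations of the re-mapped image.

      # Sketch: resample an image on a (log r, theta) grid about a given center.
      import numpy as np
      from scipy.ndimage import map_coordinates


      def log_polar(image, center, n_r=64, n_theta=64):
          h, w = image.shape
          max_r = np.hypot(h, w) / 2.0
          log_r = np.linspace(0.0, np.log(max_r), n_r)
          theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
          rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
          rows = center[0] + rr * np.sin(tt)
          cols = center[1] + rr * np.cos(tt)
          return map_coordinates(image, [rows, cols], order=1, mode="constant")


      img = np.zeros((128, 128))
      img[40:88, 40:88] = 1.0                              # a simple square "figure"
      lp = log_polar(img, center=(63.5, 63.5))
      print(lp.shape)                                      # (64, 64)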

  5. Stratification approach for 3-D euclidean reconstruction of nonrigid objects from uncalibrated image sequences.

    PubMed

    Wang, Guanghui; Wu, Q M Jonathan

    2008-02-01

    This paper addresses the problem of 3-D reconstruction of nonrigid objects from uncalibrated image sequences. Under the assumption of an affine camera and that the nonrigid object is composed of a rigid part and a deformation part, we propose a stratification approach to recover the structure of nonrigid objects by first reconstructing the structure in affine space and then upgrading it to Euclidean space. The novelty and main features of the method lie in several aspects. First, we propose a deformation weight constraint to the problem and prove the invariability between the recovered structure and shape bases under this constraint. The constraint was not observed by previous studies. Second, we propose a constrained power factorization algorithm to recover the deformation structure in affine space. The algorithm overcomes some limitations of a previous singular-value-decomposition-based method. It can even work with missing data in the tracking matrix. Third, we propose to separate the rigid features from the deformation ones in 3-D affine space, which makes the detection more accurate and robust. The stratification matrix is estimated from the rigid features, which may relax the influence of large tracking errors in the deformation part. Extensive experiments on synthetic data and real sequences validate the proposed method and show improvements over existing solutions.

  6. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    NASA Astrophysics Data System (ADS)

    Zhao, Xianling; Liu, Jiansheng; Zhang, Huayu; Wu, Yingchun

    2015-12-01

    The visible light radiated by some high-temperature objects (below 1200 °C) lies almost entirely in the red and infrared bands. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of such objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in the present work. Moreover, a method for filtering deformed pattern images is presented for correction of the unwrapped phase. Blue sinusoidal phase-shifting fringe pattern images are projected on the surface by a digital light processing (DLP) projector, and then the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images filtered by a low-pass filter are used to calculate the fringe order. Consequently, the 3D shape of a high-temperature object is obtained from the unwrapped phase and the calibration parameter matrices of the DLP projector and 3-CCD camera. The experimental results show that the unwrapped phase is completely corrected with the filtering method by removing the high-frequency noise from the first harmonic of the B color images. The measurement system can complete the measurement in a few seconds with a relative error of less than 1 : 1000.
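
    The core phase computation is standard four-step phase shifting; a minimal sketch with a synthetic blue-channel fringe stack is given below (the paper's filtering, unwrapping and calibration steps are omitted).

      # Sketch: wrapped phase from four blue-channel fringe images shifted by 90 deg.
      import numpy as np


      def wrapped_phase(frames_rgb):
          """frames_rgb: four HxWx3 images with fringe phase shifts of 0, 90, 180, 270 deg."""
          b = [f[..., 2].astype(float) for f in frames_rgb]     # blue channel only
          return np.arctan2(b[3] - b[1], b[0] - b[2])            # wrapped to (-pi, pi]


      # Synthetic example: a linearly increasing phase rendered into blue fringes.
      h, w = 120, 160
      phi_true = np.linspace(0, 6 * np.pi, w)[None, :] * np.ones((h, 1))
      frames = []
      for k in range(4):
          fringe = 128 + 100 * np.cos(phi_true + k * np.pi / 2)
          frames.append(np.dstack([np.zeros((h, w))] * 2 + [fringe]))
      phi = wrapped_phase(frames)
      print(np.allclose(np.cos(phi), np.cos(phi_true), atol=1e-6))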

  7. Perception of physical stability and center of mass of 3-D objects.

    PubMed

    Cholewiak, Steven A; Fleming, Roland W; Singh, Manish

    2015-02-10

    Humans can judge from vision alone whether an object is physically stable or not. Such judgments allow observers to predict the physical behavior of objects, and hence to guide their motor actions. We investigated the visual estimation of physical stability of 3-D objects (shown in stereoscopically viewed rendered scenes) and how it relates to visual estimates of their center of mass (COM). In Experiment 1, observers viewed an object near the edge of a table and adjusted its tilt to the perceived critical angle, i.e., the tilt angle at which the object was seen as equally likely to fall or return to its upright stable position. In Experiment 2, observers visually localized the COM of the same set of objects. In both experiments, observers' settings were compared to physical predictions based on the objects' geometry. In both tasks, deviations from physical predictions were, on average, relatively small. More detailed analyses of individual observers' settings in the two tasks, however, revealed mutual inconsistencies between observers' critical-angle and COM settings. The results suggest that observers did not use their COM estimates in a physically correct manner when making visual judgments of physical stability. © 2015 ARVO.
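
    The physical prediction against which the settings are compared can be stated compactly: an object on a support tips once its center of mass passes vertically above the pivoting edge, so for a COM at height h and horizontal distance d from that edge the critical tilt angle is arctan(d/h). A tiny worked example (with invented dimensions) follows.

      # Worked example: critical tilt angle from COM geometry.
      import numpy as np


      def critical_angle_deg(com_height, com_to_edge):
          return np.degrees(np.arctan2(com_to_edge, com_height))


      # A 20 cm tall box, 8 cm wide, with a centered COM: d = 4 cm, h = 10 cm.
      print(critical_angle_deg(com_height=0.10, com_to_edge=0.04))   # ~21.8 deg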

  8. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-12-15

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms.

  9. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as an auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method into the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results have demonstrated that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.

  10. 3-D Parallel, Object-Oriented, Hybrid, PIC Code for Ion Ring Studies

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.

    1997-08-01

    The 3-D hybrid, Particle-in-Cell (PIC) code, FLAME has been developed to study low-frequency, large orbit plasmas in realistic cylindrical configurations. FLAME assumes plasma quasineutrality and solves the Maxwell equations with displacement current neglected. The electron component is modeled as a massless fluid and all ion components are represented by discrete macro-particles. The poloidal discretization is done by a finite-difference staggered grid method. FFT is applied in the azimuthal direction. A substantial reduction of CPU time is achieved by enabling separate time advances of background and beam particle species in the time-averaged fields. The FLAME structure follows the guidelines of object-oriented programming. Its C++ class hierarchy comprises the Utility, Geometry, Particle, Grid and Distributed base class packages. The latter encapsulates implementation of concurrent grid and particle algorithms. The particle and grid data interprocessor communications are unified and designed to be independent of both the underlying message-passing library and the actual poloidal domain decomposition technique (FFT's are local). Load balancing concerns are addressed by using adaptive domain partitions to account for nonuniform spatial distributions of particle objects. The results of 2-D and 3-D FLAME simulations in support of the FIREX program at Cornell are presented.

  11. 3D Imaging with a Single-Aperture 3-mm Objective Lens: Concept, Fabrication and Test

    NASA Technical Reports Server (NTRS)

    Korniski, Ron; Bae, Sam Y.; Shearn, Mike; Manohara, Harish; Shahinian, Hrayr

    2011-01-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10-mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  13. 3D imaging with a single-aperture 3-mm objective lens: concept, fabrication, and test

    NASA Astrophysics Data System (ADS)

    Korniski, Ronald; Bae, Sam Y.; Shearn, Michael; Manohara, Harish; Shahinian, Hrayr

    2011-10-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10-mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  14. Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

    PubMed Central

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  15. Tilting and moving-object lens for a 3D electron microscope.

    PubMed

    Ura, Katsumi

    2016-10-01

    I investigated the tilting and movement of the objective lens of a 3D electron microscope electrically as an extension of the moving-objective lens concept. The electric or magnetic potential along the tilted optical axis is analytically expressed by a multipole potential expansion about the fixed central axis. The field distributions for axially symmetric dipole and quadrupole components are numerically shown, where the optical axis of a bell-shaped magnetic lens is tilted around the lens center by up to 60°. The hexapole and octapole components are also shown at a tilt angle of 45°. © The Author 2016. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m2 of urban area in total.
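
    A toy version of the center-voting step (not the authors' full ISM pipeline; the grid size and synthetic data are invented) is sketched below: each matched feature adds its learned offset to its own position, votes are binned in a coarse 3D grid, and the densest cell yields a detection.

      # Sketch: Hough-style voting for object centers in a 3-D point cloud.
      import numpy as np


      def vote_for_centers(feature_positions, center_offsets, cell=0.25):
          votes = feature_positions + center_offsets           # predicted centers
          keys = np.floor(votes / cell).astype(int)
          cells, counts = np.unique(keys, axis=0, return_counts=True)
          best = cells[np.argmax(counts)]
          return (best + 0.5) * cell, counts.max()


      # Toy example: 50 features on an object centered at (2, 3, 0), plus clutter.
      rng = np.random.default_rng(6)
      obj_feats = rng.normal([2.0, 3.0, 0.0], 0.5, (50, 3))
      offsets = np.array([2.0, 3.0, 0.0]) - obj_feats          # ideal learned offsets
      clutter = rng.uniform(0, 10, (50, 3))
      positions = np.vstack([obj_feats, clutter])
      all_offsets = np.vstack([offsets, rng.normal(0, 1, (50, 3))])
      print(vote_for_centers(positions, all_offsets))           # center near (2, 3, 0)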

  17. An optimal sensing strategy for recognition and localization of 3-D natural quadric objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hernsoo

    1991-01-01

    An optimal sensing strategy for an optical proximity sensor system engaged in the recognition and localization of 3-D natural quadric objects is presented. The optimal sensing strategy consists of the selection of an optimal beam orientation and the determination of an optimal probing plane that compose an optimal data collection operation known as an optimal probing. The decision of an optimal probing is based on the measure of discrimination power of a cluster of surfaces on a multiple interpretation image (MII), where the measure of discrimination power is defined in terms of a utility function computing the expected number of interpretations that can be pruned out by a probing. An object representation suitable for active sensing based on a surface description vector (SDV) distribution graph and hierarchical tables is presented. Experimental results are shown.

  18. 3D shape reconstruction of loop objects in X-ray protein crystallography.

    PubMed

    Strutz, Tilo

    2011-01-01

    Knowledge of the shape of crystals can benefit data collection in X-ray crystallography. A preliminary step is the determination of the loop object, i.e., the shape of the loop holding the crystal. Based on the standard set-up of experimental X-ray stations for protein crystallography, the paper reviews a reconstruction method merely requiring 2D object contours and presents a dedicated novel algorithm. Properties of the object surface (e.g., texture) and depth information do not have to be considered. The complexity of the reconstruction task is significantly reduced by slicing the 3D object into parallel 2D cross-sections. The shape of each cross-section is determined using support lines forming polygons. The slicing technique allows the reconstruction of concave surfaces perpendicular to the direction of projection. In spite of the low computational complexity, the reconstruction method is resilient to noisy object projections caused by imperfections in the image-processing system extracting the contours. The algorithm developed here has been successfully applied to the reconstruction of shapes of loop objects in X-ray crystallography.

  19. ROOT OO model to render multi-level 3-D geometrical objects via an OpenGL

    NASA Astrophysics Data System (ADS)

    Brun, Rene; Fine, Valeri; Rademakers, Fons

    2001-08-01

    This paper presents a set of C++ low-level classes to render 3D objects within ROOT-based frameworks. This allows the development of a set of viewers with different properties, from which the user can choose, to render one and the same 3D objects.

  20. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments like laser scanners and 3D cameras, or image-based techniques like structure from motion, produce huge point clouds as a basis for further object analysis. This has considerably changed the way of data compilation away from selective manually guided processes towards automatic and computer-supported strategies. However, there is still a long way to go to achieve the quality and robustness of manual processes, as data sets are mostly very complex. Looking at existing strategies, 3D data processing for object detection and reconstruction relies heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and their inability to handle deviations. Furthermore, the lack of capability to integrate other data or information between the processing steps further exposes their limitations. This restricts the approaches to execution with a strict predefined strategy and does not allow deviations when and if new, unexpected situations arise. We propose a solution that induces intelligence into the processing activities through the usage of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". The flexibility of the solution is demonstrated through two entirely different use-case scenarios: Deutsche Bahn (German railway system) for the outdoor scenarios and Fraport (Frankfurt Airport) for the indoor scenarios. Apart from the difference in their environments, they provide different conditions which the solution needs to consider. While the locations of the objects at Fraport were previously known, those at DB were not known at the beginning.

  1. Active learning in the lecture theatre using 3D printed objects

    PubMed Central

    Smith, David P.

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  2. Active learning in the lecture theatre using 3D printed objects.

    PubMed

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student.

  3. Laser Scanning for 3D Object Characterization: Infrastructure for Exploration and Analysis of Vegetation Signatures

    NASA Astrophysics Data System (ADS)

    Koenig, K.; Höfle, B.

    2012-04-01

    Mapping and characterization of the three-dimensional nature of vegetation is increasingly gaining in importance. Deeper insight is required for e.g. forest management, biodiversity assessment, habitat analysis, precision agriculture, renewable energy production or the analysis of interaction between biosphere and atmosphere. However, the potential of 3D vegetation characterization has not been exploited so far and new technologies are needed. Laser scanning has evolved into the state-of-the-art technology for highly accurate 3D data acquisition. Several studies have by now indicated the high value of 3D vegetation description using laser data. Laser sensors provide a detailed geometric representation (geometric information) of scanned objects as well as a full profile of the laser energy that was scattered back to the sensor (radiometric information). In order to exploit the full potential of these datasets, profound knowledge of laser scanning technology for data acquisition, of geoinformation technology for data analysis and of the object of interest (e.g. vegetation) for data interpretation has to be combined. A signature database is a collection of signatures of reference vegetation objects acquired under known conditions and sensor parameters and can be used to improve information extraction from unclassified vegetation datasets. Different vegetation elements (leaves, branches, etc.) at different heights above ground and with different geometric composition contribute to the overall description (i.e. signature) of the scanned object. The developed tools allow analyzing tree objects according to single features (e.g. echo width and signal amplitude) and to any relation of features and derived statistical values (e.g. ratio of laser point attributes). For example, a single backscatter cross section value does not allow for tree species determination, whereas the average echo width per tree segment can give good estimates. Statistical values and/or distributions (e.g. Gaussian

  4. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    PubMed Central

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Péter; Baumbach, Tilo; Cloetens, Peter

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies. PMID:23185554

  5. OVERALL PROCEDURES PROTOCOL AND PATIENT ENROLLMENT PROTOCOL: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES

    EPA Science Inventory

    The purpose of this study is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D imag...

  6. A 3D Interactive Multi-object Segmentation Tool using Local Robust Statistics Driven Active Contours

    PubMed Central

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-01-01

    Extracting anatomically and functionally significant structures is one of the important tasks for both the theoretical study of medical image analysis and the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well designed algorithm with interactive software is necessary for an algorithm to be utilized in their daily work. Furthermore, the software should preferably be open source in order to be used and validated not only by the authors but also by the entire community. Therefore, the contribution of the present work is twofold: First, we propose a new robust statistics based conformal metric and the conformal area driven multiple active contour framework, to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open source graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region in the image. Then, the local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously with their interactions being motivated by the principles of action and reaction; this not only guarantees mutual exclusiveness among the contours, but also no longer relies upon the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we

  7. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    SciTech Connect

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-03-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  8. Object recognition and localization from 3D point clouds by maximum-likelihood estimation.

    PubMed

    Dantanarayana, Harshana G; Huntley, Jonathan M

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
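
    The abstract mentions a simple 2-degrees-of-freedom example. A hedged sketch of that idea, assuming a Gaussian likelihood over nearest-point residuals and using SciPy for the optimisation (neither of which is stated to be the authors' exact formulation), might look like this:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def negative_log_likelihood(translation, scene_points, model_points, sigma=0.5):
    """Gaussian negative log-likelihood of scene points given a translated model.

    Each scene point is matched to its nearest model point; the residuals are
    assumed i.i.d. Gaussian with dispersion sigma (the 'dispersion of the
    probability density functions' mentioned in the abstract)."""
    shifted = model_points + translation
    tree = cKDTree(shifted)
    distances, _ = tree.query(scene_points)
    return np.sum(distances**2) / (2 * sigma**2)

# Synthetic 2-d.f. example: the model is a unit-square outline, the scene is
# the same outline shifted by an unknown translation plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
square = np.vstack([np.c_[t, np.zeros_like(t)], np.c_[t, np.ones_like(t)],
                    np.c_[np.zeros_like(t), t], np.c_[np.ones_like(t), t]])
true_translation = np.array([2.3, -1.1])
scene = square + true_translation + 0.02 * rng.standard_normal(square.shape)

# Pose refinement from a coarse initial guess
result = minimize(negative_log_likelihood, x0=np.array([2.0, -1.0]),
                  args=(scene, square), method="Nelder-Mead")
print("estimated translation:", result.x)   # should be close to (2.3, -1.1)
```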

  9. Enhancing training performance for brain-computer interface with object-directed 3D visual guidance.

    PubMed

    Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Heng, Pheng-Ann

    2016-11-01

    Accurate classification of user intentions is essential for motor imagery (MI)-based brain-computer interfaces (BCI). Effective and appropriate user training can improve the reliability of decoding decisions related to MI tasks. In this study, we investigate the effects of visual guidance on the classification performance of MI-based BCI. Leveraging both single-subject and multi-subject BCI paradigms, we train and classify MI tasks under three different scenarios in a 3D virtual environment: a non-object-directed scenario, a static-object-directed scenario, and a dynamic-object-directed scenario. Subjects are required to imagine left-hand or right-hand movement with the visual guidance. We demonstrate that the classification performance for left-hand and right-hand MI tasks differs across these three scenarios, and confirm that both the static-object-directed and dynamic-object-directed scenarios provide better classification accuracy than the non-object-directed case. We further show that both object-directed scenarios shorten the response time and remain suitable when training data are limited. In addition, experimental results demonstrate that the multi-subject BCI paradigm improves classification performance compared with the single-subject paradigm. These results suggest that classification performance can be improved with appropriate visual guidance and a better BCI paradigm. We believe these findings have the potential to improve the classification performance of MI-based BCI and to be applied in practical applications.

  10. Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.

    PubMed

    Ueda, Yoshiyuki; Saiki, Jun

    2012-01-01

    Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.

  11. Performance of a neural-network-based 3-D object recognition system

    NASA Astrophysics Data System (ADS)

    Rak, Steven J.; Kolodzy, Paul J.

    1991-08-01

    Object recognition in laser radar sensor imagery is a challenging application of neural networks. The task involves recognition of objects at a variety of distances and aspects with significant levels of sensor noise. These variables are related to sensor parameters such as sensor signal strength and angular resolution, as well as object range and viewing aspect. The effect of these parameters on a fixed recognition system based on log-polar mapped features and an unsupervised neural network classifier is investigated. This work is an attempt to quantify the design parameters of a laser radar measurement system with respect to classifying and/or identifying objects by the shape of their silhouettes. Experiments with vehicle silhouettes rotated through a 90 deg viewing angle from broadside to head-on ('out-of-plane' rotation) have been used to quantify the performance of a log-polar map/neural-network based 3-D object recognition system. These experiments investigated several key issues such as category stability, category memory compression, image fidelity, and viewing aspect. Initial results indicate a compression from 720 possible categories (8 vehicles × 90 out-of-plane rotations) to a classifier memory with approximately 30 stable recognition categories. These results parallel the human experience of studying an object from several viewing angles yet recognizing it through a wide range of viewing angles. Results are presented illustrating category formation for an eight-vehicle dataset as a function of several sensor parameters. These include: (1) sensor noise, as a function of carrier-to-noise ratio; (2) pixels on the vehicle, related to angular resolution and target range; and (3) viewing aspect, as related to sensor-to-platform depression angle. This work contributes to the formation of a three-dimensional object recognition system.
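
    As an illustration of the log-polar mapped features named above, a minimal NumPy sketch of log-polar resampling of a binary silhouette follows; the grid sizes and the choice of the silhouette centroid as the mapping centre are assumptions, not details from the paper.

```python
import numpy as np

def log_polar_map(silhouette, n_radii=32, n_angles=64):
    """Resample a 2-D binary silhouette onto a log-polar grid centred on its centroid."""
    ys, xs = np.nonzero(silhouette)
    cy, cx = ys.mean(), xs.mean()                      # centroid of the silhouette
    max_r = np.hypot(silhouette.shape[0], silhouette.shape[1]) / 2

    # log-spaced radii and uniformly spaced angles
    radii = np.exp(np.linspace(0, np.log(max_r), n_radii))
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")

    sample_y = np.clip((cy + rr * np.sin(aa)).round().astype(int), 0, silhouette.shape[0] - 1)
    sample_x = np.clip((cx + rr * np.cos(aa)).round().astype(int), 0, silhouette.shape[1] - 1)
    return silhouette[sample_y, sample_x]              # (n_radii, n_angles) feature map

# Toy silhouette: a filled rectangle standing in for a vehicle side view
img = np.zeros((128, 128), dtype=np.uint8)
img[50:80, 30:100] = 1
features = log_polar_map(img)
print(features.shape)   # (32, 64); in-plane rotations about the centre become shifts along the angle axis
```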

  12. Distance Between Sets as an Objective Measure of Retrieval Effectiveness

    ERIC Educational Resources Information Center

    Heine, M. H.

    1973-01-01

    The Marczewski-Steinhaus metric provides what appears to be an objective general measure of retrieval effectiveness within the framework of set theory and the theory of metric spaces. (19 references) (Author/SJ)
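
    For reference, the Marczewski-Steinhaus distance between two finite sets is the size of their symmetric difference divided by the size of their union (one minus the Jaccard index). A small sketch applying it to a retrieved set versus a relevant set (the document identifiers are made up) is:

```python
def marczewski_steinhaus(a, b):
    """Normalized set distance |A symmetric-difference B| / |A union B| (0 if both empty)."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return len(a ^ b) / len(union)

retrieved = {"d1", "d2", "d3", "d5"}   # documents returned by the system (hypothetical)
relevant  = {"d2", "d3", "d4"}         # documents judged relevant (hypothetical)
print(marczewski_steinhaus(retrieved, relevant))   # 3/5 = 0.6; 0 would mean perfect retrieval
```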

  13. Object-Centered Knowledge Representation and Information Retrieval.

    ERIC Educational Resources Information Center

    Panyr, Jiri

    1996-01-01

    Discusses object-centered knowledge representation and information retrieval. Highlights include semantic networks; frames; predicative (declarative) and associative knowledge; cluster analysis; creation of subconcepts and superconcepts; automatic classification; hierarchies and pseudohierarchies; graph theory; term classification; clustering of…

  14. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of irregularly shaped objects from digital image sequences of those objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, and then dense, 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different densities are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) parameter, with greater SN values resulting in better-quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to depend strongly on the clustering radius and angle threshold values. The results of this study give readers a valuable insight into the effects of the different control parameters on the reconstructed surface quality.
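
    A minimal sketch of the two reconstruction algorithms compared in the study, using the Open3D library (our choice; the paper does not state which implementation was used, and the file name and parameter values below are placeholders):

```python
import open3d as o3d

# Load (or otherwise obtain) the image-derived point cloud of the object
pcd = o3d.io.read_point_cloud("object_points.ply")          # hypothetical file name
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Poisson reconstruction; 'depth' controls the octree resolution of the implicit surface
poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Ball pivoting; quality depends strongly on the chosen ball radii, analogous to the
# clustering radius / angle threshold sensitivity reported in the paper
radii = o3d.utility.DoubleVector([0.01, 0.02, 0.04])
bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

o3d.io.write_triangle_mesh("poisson.ply", poisson_mesh)
o3d.io.write_triangle_mesh("ball_pivoting.ply", bpa_mesh)
```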

  15. Laser-assisted direct manufacturing of functionally graded 3D objects

    NASA Astrophysics Data System (ADS)

    Iakovlev, A.; Trunova, E.; Grevey, Dominique; Smurov, Igor

    2003-09-01

    Coaxial powder injection into a laser beam was applied for the laser-assisted direct manufacturing of 3D functionally graded (FG) objects. The powders of Stainless Steel 316L and Stellite grade 12 were applied. The following laser sources were used: (1) quasi-cw CO2 Rofin Sinar laser with 120 μm focal spot diameter and (2) pulsed-periodic Nd:YAG (HAAS HL 304P) with 200 μm focal spot diameter. The objects were fabricated layer-by-layer in the form of "walls", having the thickness of about 200 μm for CO2 laser and 300 μm for Nd:YAG laser. SEM analysis was applied for the FG objects fabricated by CO2 laser, yielding wall elements distribution in vertical direction. It was found that microhardness distribution is fully correlated with the components distribution. The compositional gradient can be smooth or sharp. Periodic multi-layered structures can be obtained as well. Minimal thickness of a layer with the fixed composition (for cw CO2 laser) is about 50 μm. Minimal thickness of a graded material zone, i.e. zone with composition variation from pure stainless steel to pure stellite is about 30 μm.

  16. Serial packing of arbitrary 3D objects for optimizing layered manufacturing

    NASA Astrophysics Data System (ADS)

    Dickinson, John K.; Knopf, George K.

    1998-10-01

    Parallel approaches for packing arbitrary 3D objects into fixed volumes are characterized by rearranging all of the parts simultaneously and evaluating the results. The practical application of each proposed approach to real-world problems has been hindered by the computational time required to find a solution or by oversimplifications made to reduce the time required. A serial approach is proposed in this paper that reduces the complexity of the problem domain by packing each object one at a time as 'best as possible', thus more closely emulating the way a human might arrange items in the trunk of a car. This technique has enabled the implementation of an efficient packing algorithm that is not limited to working with the objects' bounding boxes or to a restricted set of permissible orientations. Preliminary tests demonstrate that the technique reduces computational times, on average, by a factor of 19 or more compared to an existing technique. Furthermore, the new approach is guaranteed to produce a viable packing arrangement for a subset of the parts even if every part cannot possibly be accommodated in the available volume, a typical situation found in rapid prototyping service bureaus. The same cannot be said for existing parallel packing algorithm implementations.
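
    A toy illustration of the serial, one-object-at-a-time strategy is sketched below. Note that the paper explicitly avoids bounding-box and orientation restrictions; this sketch packs axis-aligned boxes on a coarse grid purely to show the greedy placement loop, not the authors' algorithm.

```python
import numpy as np
from itertools import product

def pack_serially(container_shape, part_sizes):
    """Place each part in turn at the first free position found (scan order:
    lowest z, then y, then x). Parts that do not fit anywhere are skipped,
    mirroring the 'viable packing for a subset of the parts' behaviour."""
    occupied = np.zeros(container_shape, dtype=bool)
    placements = []
    for size in part_sizes:                      # one object at a time
        placed = False
        for z, y, x in product(*(range(c - s + 1) for c, s in zip(container_shape, size))):
            region = occupied[z:z + size[0], y:y + size[1], x:x + size[2]]
            if not region.any():                 # space is free: place the part here
                region[...] = True
                placements.append(((z, y, x), size))
                placed = True
                break
        if not placed:
            placements.append((None, size))      # part could not be accommodated
    return placements

# Example: a 10x10x10 build volume and a few box-shaped parts (grid units)
print(pack_serially((10, 10, 10), [(4, 6, 6), (5, 5, 5), (3, 3, 3), (9, 9, 9)]))
```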

  17. Extraction and classification of 3D objects from volumetric CT data

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for an Explosive Detection System (EDS) using our multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object, and the feature vector is classified by an SVM previously learned using a set of ground-truth threat and benign objects. The learned SVM classifier has been shown to be effective in classifying different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm can readily incorporate newly emerging threat materials as well as accommodate data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which plots Probability of Detection (PD) as a function of Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
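
    The classification stage (one feature vector per extracted object, an SVM trained on ground-truth threat and benign objects, and a threshold swept to obtain the ROC curve) can be sketched with scikit-learn; the features and numbers below are placeholders, not the actual EDS features.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: one row per segmented-and-carved object,
# e.g. [mean CT density, volume, shape compactness, surface-to-volume ratio]
rng = np.random.default_rng(42)
X_benign = rng.normal(loc=[1000, 500, 0.6, 2.0], scale=[120, 200, 0.1, 0.4], size=(200, 4))
X_threat = rng.normal(loc=[1300, 350, 0.9, 1.2], scale=[100, 150, 0.1, 0.3], size=(200, 4))
X = np.vstack([X_benign, X_threat])
y = np.array([0] * 200 + [1] * 200)          # 0 = benign, 1 = threat

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)

# Scores for new objects; sweeping a threshold over these probabilities
# traces out the ROC curve (PD versus PFA) used to evaluate the ATD algorithm.
new_objects = rng.normal(loc=[1250, 380, 0.85, 1.3], scale=[100, 150, 0.1, 0.3], size=(3, 4))
print(clf.predict_proba(new_objects)[:, 1])
```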

  18. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor can substantially improve the performance of such applications. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contributions of the individual features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553

  19. Polarization imaging of a 3D object by use of on-axis phase-shifting digital holography.

    PubMed

    Nomura, Takanori; Javidi, Bahram; Murata, Shinji; Nitanai, Eiji; Numata, Takuhisa

    2007-03-01

    A polarimetric imaging method of a 3D object by use of on-axis phase-shifting digital holography is presented. The polarimetric image results from a combination of two kinds of holographic imaging using orthogonal polarized reference waves. Experimental demonstration of a 3D polarimetric imaging is presented.
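
    The reconstruction step of four-step on-axis phase-shifting holography can be illustrated numerically. This is a synthetic, scalar example only; the polarized reference waves and the propagation back to the object plane used in the paper are not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)

# Synthetic complex object wave at the hologram plane and a known plane reference wave
obj = rng.random(shape) * np.exp(1j * 2 * np.pi * rng.random(shape))
ref = np.full(shape, 2.0 + 0.0j)                      # constant-phase reference of amplitude 2

# Four on-axis holograms with the reference phase stepped by 0, pi/2, pi, 3*pi/2
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I0, I90, I180, I270 = [np.abs(obj + ref * np.exp(1j * d)) ** 2 for d in shifts]

# Standard four-step recovery of the complex object wave (consistent with the
# phase-shift convention used when simulating the holograms above)
recovered = ((I0 - I180) + 1j * (I90 - I270)) / (4 * np.conj(ref))
print(np.allclose(recovered, obj))                    # True
```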

  20. 3D printing cybersecurity: detecting and preventing attacks that seek to weaken a printed object by changing fill level

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-06-01

    Prior work by Zeltmann, et al. has demonstrated the impact of small defects and other irregularities on the structural integrity of 3D printed objects. It posited that such defects could be introduced intentionally. The current work looks at the impact of changing the fill level on object structural integrity. It considers whether the existence of an appropriate level of fill can be determined through visible light imagery-based assessment of a 3D printed object. A technique for assessing the quality and sufficiency of quantity of 3D printed fill material is presented. It is assessed experimentally and results are presented and analyzed.

  1. The role of the foreshortening cue in the perception of 3D object slant.

    PubMed

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

    Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast is still unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (change in the ratio of width to length) can induce slant perception for single shapes for both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant and S-cone isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performances similar to those obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and the cortical organization used in 3D object perception. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A modern approach to storing of 3D geometry of objects in machine engineering industry

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Aslanov, G. A.; Sokolov, A. A.

    2017-02-01

    3D graphics is a branch of computer graphics that draws heavily on both vector and raster computer graphics. It is used in interior design projects, architectural projects, advertising, educational computer programs, movies, and visual images of parts and products in engineering, among other applications. 3D computer graphics allows one to create 3D scenes, including the simulation of lighting conditions and the placement of viewpoints.

  3. Vectorial seismic modeling for 3D objects by the classical solution

    NASA Astrophysics Data System (ADS)

    Ávila-Carrera, R.; Sánchez-Sesma, F. J.; Rodríguez-Castellanos, A.; Ortiz-Alemán, C.

    2010-09-01

    The analytic benchmark solution for the scattering and diffraction of elastic P- and S-waves by a single spherical obstacle is presented in a condensed format. Our aim is to make this not widely known classical solution available to the scientific community in order to construct a direct seismic model for 3D objects. Some of the benchmark papers are frequently plagued by misprints and none offers results on the transient response. The treatment of the vectorial case appears to be incipient in the literature. The classical solution is a superposition of incident and diffracted fields. Plane P- or S-waves are assumed. They are expressed as expansions of spherical wave functions which are tested against exact results. The field diffracted by the obstacle is calculated by analytically enforcing the boundary conditions at the scatterer-matrix interface. The spherical obstacle is a cavity, an elastic inclusion or a fluid-filled body. A complete set of wave functions is used in terms of Bessel and Hankel radial functions. Legendre and trigonometric functions are used for the angular coordinates. In order to provide information to calibrate and approximate the seismic modeling for real objects, results are shown in the time and frequency domains. Diffracted displacement amplitudes versus normalized frequency and radiation patterns for various scatterer-matrix properties are reported. To study propagation features that may be useful to geophysicists and engineers, synthetic seismograms for some relevant cases are computed.
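
    The expansion of an incident plane wave in spherical wave functions, which underlies the classical solution described above, can be checked numerically. The sketch below covers only the scalar partial-wave expansion; the full vectorial P/S-wave case treated in the paper requires vector spherical harmonics.

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_expansion(k, r, theta, l_max=40):
    """Partial-wave expansion  e^{ikz} = sum_l (2l+1) i^l j_l(kr) P_l(cos(theta))."""
    total = np.zeros_like(np.asarray(r, dtype=complex))
    for l in range(l_max + 1):
        total += (2 * l + 1) * (1j ** l) * spherical_jn(l, k * r) * eval_legendre(l, np.cos(theta))
    return total

k = 2.0                              # wavenumber
r = np.array([0.5, 1.0, 3.0])        # radial coordinates of test points
theta = np.array([0.3, 1.2, 2.5])    # polar angles of test points
exact = np.exp(1j * k * r * np.cos(theta))          # plane wave travelling along +z
print(np.allclose(plane_wave_expansion(k, r, theta), exact))    # True for sufficiently large l_max
```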

  4. Using video objects and relevance feedback in video retrieval

    NASA Astrophysics Data System (ADS)

    Sav, Sorin; Lee, Hyowon; Smeaton, Alan F.; O'Connor, Noel E.; Murphy, Noel

    2005-10-01

    Video retrieval is mostly based on using text from dialogue, and this remains the most significant component despite progress in other aspects. One problem with this arises when a searcher wants to locate video based on what is appearing in the video rather than what is being spoken about. Alternatives such as automatically detected features and image-based keyframe matching can be used, though these still need further improvement in quality. One other modality for video retrieval is based on segmenting objects from video and allowing end-users to use these as part of querying. This uses similarity between query objects and objects from video, and in theory allows retrieval based on what is actually appearing on-screen. The main hurdles to greater use of this are the overhead of object segmentation on large amounts of video and the issue of whether we can actually achieve effective object-based retrieval. We describe a system to support object-based video retrieval where a user selects example video objects as part of the query. During a search, a user builds up a set of these, which are matched against objects previously segmented from a video library. This match is based on MPEG-7 Dominant Colour, Shape Compaction and Texture Browsing descriptors. We use a user-driven, semi-automated segmentation process to segment the video archive, which is very accurate and faster than conventional video annotation.
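
    MPEG-7's Dominant Colour descriptor is essentially a small set of representative colours with their percentages. A rough stand-in (k-means clustering in RGB, with a simplified matching distance rather than the normative MPEG-7 matching function) gives a flavour of the object matching step:

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(pixels, n_colours=4):
    """Return (colours, percentages) for the pixels of one segmented video object.
    pixels: (N, 3) array of RGB values belonging to the object mask."""
    km = KMeans(n_clusters=n_colours, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_colours)
    return km.cluster_centers_, counts / counts.sum()

def descriptor_distance(desc_a, desc_b):
    """Simplified dissimilarity: percentage-weighted distances between closest colour pairs."""
    colours_a, pct_a = desc_a
    colours_b, pct_b = desc_b
    d = np.linalg.norm(colours_a[:, None, :] - colours_b[None, :, :], axis=-1)
    return float(np.sum(pct_a * d.min(axis=1)) + np.sum(pct_b * d.min(axis=0)))

# Toy example: two objects with similar colour make-up should score a small distance
rng = np.random.default_rng(3)
object_a = rng.normal([200, 40, 40], 10, size=(500, 3))    # mostly red pixels
object_b = rng.normal([195, 45, 45], 12, size=(400, 3))
print(descriptor_distance(dominant_colours(object_a), dominant_colours(object_b)))
```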

  5. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots.

  6. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed Central

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  7. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  8. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3D refractive index maps

    NASA Astrophysics Data System (ADS)

    Kim, Kyoohyun; Park, Yongkeun

    2017-05-01

    Optical trapping can manipulate the three-dimensional (3D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieve the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3D refractive index distribution of samples. Engineering the 3D light field distribution of a trapping beam based on the measured 3D refractive index map of samples generates a light mould, which can manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can be directly applied in biophotonics and soft matter physics.

  9. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  10. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
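
    For reference, the discrete-time Wiener process acceleration (WPA) model mentioned above uses, per coordinate, the standard textbook transition and process-noise matrices; the frame interval and noise level below are illustrative, not the values used in the paper.

```python
import numpy as np

def wpa_model(T, q):
    """Discrete Wiener-process-acceleration model for one coordinate.

    State is [position, velocity, acceleration]; T is the frame interval and
    q the power spectral density of the white noise driving the acceleration."""
    F = np.array([[1.0, T, 0.5 * T**2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

F, Q = wpa_model(T=0.02, q=1.0)     # e.g. a 50 Hz focal-plane-array frame rate (assumed)
x = np.array([0.0, 5.0, 0.3])       # position, velocity, acceleration in FPA units
print(F @ x)                        # one-step state prediction of the kind used inside the MB filter
```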

  11. 3D models automatic reconstruction of selected close range objects. (Polish Title: Automatyczna rekonstrukcja modeli 3D małych obiektów bliskiego zasiegu)

    NASA Astrophysics Data System (ADS)

    Zaweiska, D.

    2013-12-01

    Reconstruction of three-dimensional, realistic models of objects from digital images has been a research topic in many areas of science for many years. This development is stimulated by recently introduced technologies and tools, such as digital photography, laser scanners, increases in equipment efficiency, and the Internet. The objective of this paper is to present results of automatic modeling of selected close-range objects, using digital photographs acquired with the Hasselblad H4D50 camera. The author's software tool was used for the calculations; it performs the successive stages of 3D model creation. The modeling process is presented as a complete workflow, starting from image acquisition and ending with the creation of a photorealistic 3D model in the same software environment. Experiments were performed on selected close-range objects with an appropriately arranged image geometry, forming a ring around the measured object. The Area Based Matching (CC/LSM) method and the RANSAC algorithm, with the use of tensor calculus, were employed for automatic matching of points detected with the SUSAN algorithm. Surface reconstruction is one of the important stages of 3D modeling. Reconstruction of precise surfaces from an unorganized point cloud obtained by automatic processing of digital images is a difficult task which has not been definitively solved. Creation of polygonal models that can meet high requirements for modeling and visualization is needed in many applications. The polygonal method is usually the best way to represent measurement results precisely and, at the same time, to achieve an optimal description of the surface. Three algorithms were tested: the volumetric method (VCG), the Poisson method and the ball-pivoting method. These methods are mostly applied to modeling of uniform grids of points. Results of experiments proved that incorrect

  12. Does scene context always facilitate retrieval of visual object representations?

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2011-04-01

    An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

  13. Preschoolers' Preparation for Retrieval in Object Relocation Tasks.

    ERIC Educational Resources Information Center

    Beal, Carole R.; Fleisig, Wayne E.

    The finding that young children do not prepare markers to help themselves relocate objects after a delay may have resulted from children's misunderstanding of the difficulty of unassisted retrieval. This study examined children's ability to recognize that they should prepare markers in two simplified object relocation tasks after they had been…

  14. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2011-11-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We examined the effect of the direction of object motion relative to the transducer sweep on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0-35 mm s-1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision reduced with speed and tracking failure was observed at speeds of greater than 14 mm s-1. Tracking failure was attributed to speckle de-correlation as a result of decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that using the swept probe technology speckle tracking accuracy is currently too poor to track homogeneous tissue over

  15. 3-D ion distribution and evolution in storm-time RC Retrieved from TWINS ENA by differential voxel CT technique

    NASA Astrophysics Data System (ADS)

    Ma, S.; Yan, W.; Xu, L.

    2013-12-01

    The quantitative retrieval of the 3-D spatial distribution of the parent energetic ions of ENA from a 2-D ENA image is quite a challenging task. The Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) mission of NASA is the first constellation to perform stereoscopic magnetospheric imaging of energetic neutral atoms (ENA) from a pair of spacecraft flying on two widely separated Molniya orbits. TWINS provides a unique opportunity to retrieve the 3-D distribution of ions in the ring current (RC) by using a volumetric pixel (voxel) CT inversion method. In this study the voxel CT method is implemented for a series of differential ENA fluxes averaged over about 6 to 7 sweeps (corresponding to a time period of about 9 min.) at different energy levels ranging from 5 to 100 keV, obtained simultaneously by the two satellites during the main phase of a great magnetic storm with minimum Sym-H of -156 nT on 24-25 October 2011. The data were selected to span a period of about 50 minutes during which a large substorm was undergoing its expansion phase first and then recovery. The ENA species of O and H are distinguished for some time-segments by analyzing the pulse-height signals of secondary electrons emitted from the carbon foil and impacting on the MCP detector in the TWINS sensors. In order to eliminate the possible influence on retrieval induced by instrument bias error, a differential voxel CT technique is applied. The flux intensity of the ENAs' parent ions in the RC has been obtained as a function of energy, L value, MLT sector and latitude, along with their time evolution during the storm-time substorm expansion phase. Forward calculations proved the reliability of the retrieved results. The results show that the RC is highly asymmetric, with a major concentration in the midnight-to-dawn sector at equatorial latitudes. Halfway through the substorm expansion there occurred a large enhancement of equatorial ion flux at lower energy (5 keV) in the dusk sector, with narrow extent

  16. True-3D Accentuating of Grids and Streets in Urban Topographic Maps Enhances Human Object Location Memory

    PubMed Central

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies in vision research have provided initial evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not been explored for spatial memory tasks and cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. Memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for improving the cognitive representation of learned cartographic information. PMID:25679208

  17. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    PubMed

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies in vision research have provided initial evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not been explored for spatial memory tasks and cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. Memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for improving the cognitive representation of learned cartographic information.

  18. Objective and subjective comparison of standard 2-D and fully 3-D reconstructed data on a PET/CT system.

    PubMed

    Strobel, Klaus; Rüdy, Matthias; Treyer, Valerie; Veit-Haibach, Patrick; Burger, Cyrill; Hany, Thomas F

    2007-07-01

    The relative advantage of fully 3-D versus 2-D mode for whole-body imaging is currently the focus of considerable expert debate. The nature of 3-D PET acquisition for FDG PET/CT theoretically allows a shorter scan time and more efficient use of FDG than the standard 2-D acquisition. We therefore objectively and subjectively compared standard 2-D and fully 3-D reconstructed data for FDG PET/CT on a research PET/CT system. In a total of 36 patients (mean 58.9 years, range 17.3-78.9 years; 21 male, 15 female) referred for known or suspected malignancy, FDG PET/CT was performed using a research PET/CT system with advanced detector technology offering improved sensitivity and spatial resolution. After a 45 min uptake period, a low-dose CT (40 mAs) from head to thigh was performed, followed by 2-D PET (emission 3 min per field) and 3-D PET (emission 1.5 min per field), both with a seven-slice overlap to cover the identical anatomical region. Acquisition time was therefore 50% less (seven fields; 21 min vs. 10.5 min). PET data were acquired in a randomized fashion, so in 50% of the cases the 2-D data were acquired first. CT data were used for attenuation correction. 2-D (OSEM) and 3-D PET images were iteratively reconstructed. Subjective analysis of 2-D and 3-D images was performed by two readers in a blinded, randomized fashion evaluating the following criteria: sharpness of organs (liver, chest wall/lung), overall image quality, and detectability and dignity of each identified lesion. Objective analysis of the PET data was performed by measuring the maximum standardized uptake value normalized to lean body mass (SUV(max,LBM)) of identified lesions. On average, per patient, the SUV(max) was 7.86 (SD 7.79) for 2-D and 6.96 (SD 5.19) for 3-D. On a lesion basis, the average SUV(max) was 7.65 (SD 7.79) for 2-D and 6.75 (SD 5.89) for 3-D. The absolute difference on a paired t-test of SUV 3-D-2-D based on each measured lesion was significant with an average of -0.956 (P=0.002) and an average of -0.884 on a

  19. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been carried out to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily driven by affective mechanisms can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons. It offers high information transfer rates, users can learn to control the BCI system within a few minutes, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  20. Tomographic active optical trapping of arbitrarily shaped objects by exploiting 3D refractive index maps

    PubMed Central

    Kim, Kyoohyun; Park, YongKeun

    2017-01-01

    Optical trapping can manipulate the three-dimensional (3D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieve the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3D refractive index distribution of samples. Engineering the 3D light field distribution of a trapping beam based on the measured 3D refractive index map of samples generates a light mould, which can manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can be directly applied in biophotonics and soft matter physics. PMID:28530232

  1. Flying triangulation--an optical 3D sensor for the motion-robust acquisition of complex objects.

    PubMed

    Ettl, Svenja; Arold, Oliver; Yang, Zheng; Häusler, Gerd

    2012-01-10

    Three-dimensional (3D) shape acquisition is difficult if an all-around measurement of an object is desired or if a relative motion between object and sensor is unavoidable. An optical sensor principle, which we call "flying triangulation", is presented that enables a motion-robust acquisition of 3D surface topography. It combines a simple handheld sensor with sophisticated registration algorithms. Complex objects can be acquired easily, simply by freely hand-guiding the sensor around the object. Real-time feedback of the sequential measurement results enables comfortable handling for the user. No tracking is necessary. In contrast to most other eligible sensors, the presented sensor generates 3D data from each single camera image.

  2. Automatic object extraction over multiscale edge field for multimedia retrieval.

    PubMed

    Kiranyaz, Serkan; Ferreira, Miguel; Gabbouj, Moncef

    2006-12-01

    In this work, we focus on automatic extraction of object boundaries from the Canny edge field for the purpose of content-based indexing and retrieval over image and video databases. A multiscale approach is adopted in which each successive scale provides further simplification of the image by removing more details, such as texture and noise, while keeping major edges. At each stage of the simplification, edges are extracted from the image and gathered in a scale-map, over which a perceptual subsegment analysis is performed in order to extract true object boundaries. The analysis is mainly motivated by Gestalt laws, and our experimental results suggest a promising performance for main object extraction, even for images with crowded textural edges and objects with color, texture, and illumination variations. Finally, integrating the whole process as a feature extraction module into the MUVIS framework allows us to test the mutual performance of the proposed object extraction method and the subsequent shape description in the context of multimedia indexing and retrieval. A promising retrieval performance is achieved and, in some particular examples, the experimental results show that the proposed method delivers a retrieval performance that cannot be achieved using other features such as color or texture.
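
    A minimal sketch of the multiscale edge field idea, using OpenCV (the scale schedule and Canny thresholds are placeholders, and the perceptual sub-segment analysis that follows in the paper is not shown):

```python
import cv2
import numpy as np

def multiscale_edge_field(image_bgr, sigmas=(1.0, 2.0, 4.0, 8.0), low=50, high=150):
    """Build a 'scale-map': Canny edges of increasingly smoothed versions of the image.

    Stronger smoothing removes texture and noise while major object boundaries
    survive across scales; the returned stack has one edge map per scale."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    scale_map = []
    for sigma in sigmas:
        smoothed = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)
        scale_map.append(cv2.Canny(smoothed, low, high))
    return np.stack(scale_map)                     # shape: (n_scales, H, W)

# Edges that persist over all scales are good candidates for object boundaries
image = cv2.imread("frame.png")                    # hypothetical input image
edges = multiscale_edge_field(image)
persistent = (edges > 0).all(axis=0).astype(np.uint8) * 255
cv2.imwrite("persistent_edges.png", persistent)
```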

  3. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    PubMed

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-02

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  5. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry

    NASA Astrophysics Data System (ADS)

    Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina

    2013-12-01

    The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by the calibration of the video camera. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of the video frames using points with known coordinates from Terrestrial Laser Scanning (TLS), and generation of a TIN model using automatic matching methods. The objects were measured with a pulsed laser scanner, a Leica ScanStation 2. The 3D models of architectural objects created in this way were compared with 3D models of the same objects for which a self-calibration bundle adjustment process had been performed; for this purpose, PhotoModeler software was used. To assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used, applying a shortest-distance method. Analysis of the accuracy showed that 3D models generated from video images differ by about 0.06-0.13 m compared to TLS data.

  6. Integration of Complex Objects and Transitive Relationships for Information Retrieval.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo; Niemi, Timo

    1999-01-01

    Shows that in advanced information-retrieval applications capabilities for data aggregation, transitive computation and non-first normal-form relational computation are often necessary at the same time. Topics include complex objects; advanced data models; query languages; query formulation; knowledge representation; and query-language syntax.…

  7. Object recognition memory: neurobiological mechanisms of encoding, consolidation and retrieval.

    PubMed

    Winters, Boyer D; Saksida, Lisa M; Bussey, Timothy J

    2008-07-01

    Tests of object recognition memory, or the judgment of the prior occurrence of an object, have made substantial contributions to our understanding of the nature and neurobiological underpinnings of mammalian memory. Only in recent years, however, have researchers begun to elucidate the specific brain areas and neural processes involved in object recognition memory. The present review considers some of this recent research, with an emphasis on studies addressing the neural bases of perirhinal cortex-dependent object recognition memory processes. We first briefly discuss operational definitions of object recognition and the common behavioural tests used to measure it in non-human primates and rodents. We then consider research from the non-human primate and rat literature examining the anatomical basis of object recognition memory in the delayed nonmatching-to-sample (DNMS) and spontaneous object recognition (SOR) tasks, respectively. The results of these studies overwhelmingly favor the view that perirhinal cortex (PRh) is a critical region for object recognition memory. We then discuss the involvement of PRh in the different stages--encoding, consolidation, and retrieval--of object recognition memory. Specifically, recent work in rats has indicated that neural activity in PRh contributes to object memory encoding, consolidation, and retrieval processes. Finally, we consider the pharmacological, cellular, and molecular factors that might play a part in PRh-mediated object recognition memory. Recent studies in rodents have begun to indicate the remarkable complexity of the neural substrates underlying this seemingly simple aspect of declarative memory.

  8. Influence of the measurement object's reflective properties on the accuracy of array projection-based 3D sensors

    NASA Astrophysics Data System (ADS)

    Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    In order to increase the measurement speed of pattern projection-based three-dimensional (3-D) sensors, we introduced the so-called array projector in 2014, which allows pattern projection at rates of several thousand frames per second. As the patterns are switched by turning the light sources of multiple slide projectors on and off, each pattern originates from a different projection center. This may lead to a 3-D point deviation when measuring glossy objects. In this contribution, we theoretically and experimentally investigate the dependence of this deviation on the measurement object's reflective properties. Furthermore, we propose a procedure for compensating for this deviation.

  9. Spherical blurred shape model for 3-D object and pose recognition: quantitative analysis and HCI applications in smart environments.

    PubMed

    Lopes, Oscar; Reyes, Miguel; Escalera, Sergio; Gonzàlez, Jordi

    2014-12-01

    The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public multiclass depth object data, 3-D facial expression data, and a novel hand pose data set shows significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human computer interaction scenarios.

  10. Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images

    PubMed Central

    Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro; Aoki, Hiroshi; Takeuchi, Ken; Suzuki, Yasuo

    2017-01-01

    Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while they are still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis (UC). Thus, to assist UC diagnosis, it is necessary to develop a new technology that can retrieve, from a store of previously diagnosed images showing various symptoms of the colonic mucosa, past cases similar to the diagnostic target image. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy. PMID:28255295

  11. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a full 360-degree horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous images of a sample placed at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  12. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database and highlighting the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
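
    As a rough illustration of the server-side bag-of-words matching step described above (the vocabulary, descriptors and cosine scoring below are assumptions of this sketch, not the authors' implementation), local descriptors are quantized against a visual vocabulary and database images are ranked by histogram similarity:

      import numpy as np

      def bow_histogram(descriptors, vocabulary):
          """Quantize local descriptors against a visual vocabulary and return
          an L2-normalized bag-of-words histogram."""
          d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
          words = d2.argmin(axis=1)                 # nearest visual word per descriptor
          hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
          norm = np.linalg.norm(hist)
          return hist / norm if norm > 0 else hist

      def rank_database(query_descriptors, database_descriptors, vocabulary):
          """Return database indices sorted by cosine similarity to the query."""
          q = bow_histogram(query_descriptors, vocabulary)
          scores = [q @ bow_histogram(d, vocabulary) for d in database_descriptors]
          return np.argsort(scores)[::-1]

      # toy usage with random 64-dimensional descriptors and a 32-word vocabulary
      rng = np.random.default_rng(0)
      vocabulary = rng.normal(size=(32, 64))
      query = rng.normal(size=(120, 64))
      database = [rng.normal(size=(rng.integers(80, 200), 64)) for _ in range(5)]
      print(rank_database(query, database, vocabulary))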

  13. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-11-30

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.
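
    For illustration only, the core geometric step of such a sheet-of-light triangulation setup can be sketched as intersecting the back-projected camera ray of a detected laser-stripe pixel with the laser plane; the camera intrinsics and plane parameters below are invented for this sketch and are not taken from the paper:

      import numpy as np

      def triangulate_stripe_pixel(pixel, K, plane_normal, plane_d):
          """Intersect the camera ray through a laser-stripe pixel with the laser
          plane n.X + d = 0 (all quantities in camera coordinates)."""
          u, v = pixel
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction through the pixel
          t = -plane_d / (plane_normal @ ray)              # solve n.(t*ray) + d = 0
          return t * ray                                   # 3D surface point

      # assumed pinhole intrinsics and a laser plane tilted 45 degrees about the x axis
      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
      n, d = np.array([0.0, -0.7071, 0.7071]), -70.0
      print(triangulate_stripe_pixel((350.0, 260.0), K, n, d))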

  14. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    PubMed Central

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403

  15. Influence of limited random-phase of objects on the image quality of 3D holographic display

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-averaging method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images are performed, showing that objects with a limited phase range can effectively suppress the speckle noise in the reconstructed images. Because of its effectiveness and simplicity, the method is expected to enable high-quality reconstructed images in future 2D and 3D displays.
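
    A minimal numerical sketch of the idea of limiting the random-phase range attached to the object (the single-FFT propagation and the parameter values are simplifying assumptions of this sketch, not the authors' configuration):

      import numpy as np

      def limited_random_phase(shape, phase_range, initial_phase=0.0, seed=0):
          """Random phase restricted to [initial_phase, initial_phase + phase_range]
          instead of the full [0, 2*pi) range."""
          rng = np.random.default_rng(seed)
          return initial_phase + phase_range * rng.random(shape)

      # attach a limited random phase to a toy object amplitude before computing
      # the hologram; a single FFT stands in here for the actual propagation model
      amplitude = np.ones((256, 256))
      phase = limited_random_phase(amplitude.shape, phase_range=np.pi / 4)
      object_field = amplitude * np.exp(1j * phase)
      hologram_plane_field = np.fft.fftshift(np.fft.fft2(object_field))
      print(hologram_plane_field.shape, float(np.abs(hologram_plane_field).max()))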

  16. Progress in Understanding the Impacts of 3-D Cloud Structure on MODIS Cloud Property Retrievals for Marine Boundary Layer Clouds

    NASA Technical Reports Server (NTRS)

    Zhang, Zhibo; Werner, Frank; Miller, Daniel; Platnick, Steven; Ackerman, Andrew; DiGirolamo, Larry; Meyer, Kerry; Marshak, Alexander; Wind, Galina; Zhao, Guangyu

    2016-01-01

    Theory: A novel framework based on 2-D Taylor expansion for quantifying the uncertainty in MODIS retrievals caused by sub-pixel reflectance inhomogeneity (Zhang et al. 2016). How cloud vertical structure influences MODIS LWP retrievals (Miller et al. 2016). Observation: Analysis of failed MODIS cloud property retrievals (Cho et al. 2015). Cloud property retrievals from 15 m resolution ASTER observations (Werner et al. 2016). Modeling: LES-Satellite observation simulator (Zhang et al. 2012, Miller et al. 2016).

  17. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    PubMed Central

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  18. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    PubMed

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-08-20

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  19. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  1. Time Lapse of World’s Largest 3-D Printed Object

    SciTech Connect

    2016-08-29

    Researchers at the MDF have 3D-printed a large-scale trim tool for a Boeing 777X, the world's largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. In preliminary testing, the tool has been shown to decrease the time, labor, cost and errors associated with traditional manufacturing techniques and to increase energy savings; it will undergo further long-term testing.

  2. Infrared Time Lapse of World’s Largest 3D-Printed Object

    SciTech Connect

    2016-08-29

    Researchers at Oak Ridge National Laboratory have 3D-printed a large-scale trim tool for a Boeing 777X, the world's largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. In preliminary testing, the tool has been shown to decrease the time, labor, cost and errors associated with traditional manufacturing techniques and to increase energy savings; it will undergo further long-term testing.

  3. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. There are several well-established methods that already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables a motion-robust 3D acquisition of complex-shaped object surfaces with a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor zoo for different object sizes is presented. Concluding, an overview of current and future fields of investigation is given.

  4. Producing Science-Ready radar datasets for the retrieval of forest 3D structure: Correcting for terrain topography and temporal changes

    NASA Astrophysics Data System (ADS)

    Simard, M.; Lavalle, M.; Riel, B. V.; Pinto, N.; Dubayah, R.; Hensley, S.; Calderhead, A. I.

    2010-12-01

    We present the results of the 2009-2010 airborne L-band radar and lidar campaigns in boreal, temperate and tropical forests. The main objective is to improve canopy height and biomass retrieval from radar data both radiometrically and interferometrically. To achieve this, we assessed and designed models to compensate for the impact of terrain topography and temporal decorrelation on the radar data. The UAVSAR is an L-band radar capable of repeat-pass interferometry producing fully polarimetric images with a spatial resolution of 5 m. The LVIS system is a laser altimeter providing a spatially dense sampling of full waveforms. The lidar data are used to determine radar scattering model parameters as well as to validate model predictions. During the campaigns, we also collected weather as well as forest structure data in a total of 95 plots. First, we present science-ready UAVSAR datasets that are radiometrically corrected for terrain topography and vegetation reflectivity pattern. This is a critical step before accurate estimation of forest parameters. We implemented a generic and homomorphic transform that can also handle UAVSAR's antenna steering capabilities, which otherwise introduce significant distortions of the image radiometry. We show results obtained from the radiometric calibration. The improvements in biomass retrieval are significant. Another method to estimate forest 3D structure is polarimetric interferometry (polinSAR). However, since UAVSAR is a repeat-pass interferometric system, changes in forest canopy between radar acquisitions tend to decorrelate successive images. To quantify temporal decorrelation, we collected four radar datasets within a period of 11 days. The data enabled quantification of the temporal decorrelation and its relationship to weather patterns. To compensate for temporal decorrelation, we developed a polinSAR inversion model that accounts for the target changes. The canopy height inversion is demonstrated through a forward model

  5. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can enable strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used

  6. Influence of georeference for saturated excess overland flow modelling using 3D volumetric soft geo-objects

    NASA Astrophysics Data System (ADS)

    Izham, Mohamad Yusoff; Muhamad Uznir, Ujang; Alias, Abdul Rahman; Ayob, Katimon; Wan Ruslan, Ismail

    2011-04-01

    Existing 2D data structures are often insufficient for analysing the dynamism of saturation excess overland flow (SEOF) within a basin. Moreover, all stream networks and soil surface structures in GIS must be preserved within appropriate projection plane fitting techniques known as georeferencing. Including a 3D volumetric structure in the current soft geo-object simulation model would be a substantial step towards representing the 3D soft geo-objects of SEOF dynamically within a basin, by visualising saturated flow and overland flow volume. This research attempts to visualise the influence of the georeference system on the dynamism of overland flow coverage and the total overland flow volume generated from the SEOF process, using the volumetric soft geo-object (VSG) data structure. The data structure is driven by Green-Ampt methods and the Topographic Wetness Index (TWI). VSGs are analysed by focusing on spatial object preservation techniques of the conformal-based Malaysian Rectified Skew Orthomorphic (MRSO) and the equidistant-based Cassini-Soldner projection planes under the existing geodetic Malaysian Revised Triangulation 1948 (MRT48) datum and the newly implemented Geocentric Datum for Malaysia (GDM2000). The simulated results visualise the deformation of SEOF coverage under different georeference systems and their projection planes, which yield dissimilar computations of SEOF areas and overland flow volumes. The integration of georeferencing, 3D GIS and the saturation excess mechanism provides unifying evidence towards successful landslide and flood disaster management through envisioning the streamflow generating process (mainly SEOF) in a 3D environment.

  7. Tailoring bulk mechanical properties of 3D printed objects of polylactic acid varying internal micro-architecture

    NASA Astrophysics Data System (ADS)

    Malinauskas, Mangirdas; Skliutas, Edvinas; Jonušauskas, Linas; Mizeras, Deividas; Šešok, Andžela; Piskarskas, Algis

    2015-05-01

    Herein we present 3D printing (3DP) fabrication of structures having an internal microarchitecture and the characterization of their mechanical properties. Depending on the material, geometry and fill factor, the manufactured objects' mechanical performance can be tailored from "hard" to "soft." In this work we employ a low-cost fused filament fabrication 3D printer enabling point-by-point structuring of poly(lactic acid) (PLA) with ~400 µm feature spatial resolution. The chosen architectures are defined as woodpiles (BCC, FCC and 60 deg rotating). The period is chosen to be 1200 µm, corresponding to 800 µm pores. The structural quality of the produced objects is characterized using a scanning electron microscope, and their mechanical properties such as flexural modulus, elastic modulus and stiffness are measured experimentally using a universal TIRAtest2300 machine. Within the limitations of the study we show that the mechanical properties of 3D printed objects can be tuned by at least a factor of three solely by changing the woodpile geometry arrangement, while keeping the same filling factor and periodicity of the logs. Additionally, we demonstrate custom 3D printed µ-fluidic elements which can serve as cheap, biocompatible and environmentally biodegradable platforms for integrated Lab-On-Chip (LOC) devices.
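
    For context on the flexural-modulus measurement, the standard three-point-bending relation is sketched below (assuming a three-point-bend configuration; the specimen dimensions and load-deflection slope are invented numbers, not values from the paper):

      def flexural_modulus(span_mm, width_mm, thickness_mm, load_slope_n_per_mm):
          """Flexural modulus in MPa from a three-point bending test:
          E_f = L^3 * m / (4 * b * d^3), with m the slope of the
          load-deflection curve in its initial linear region."""
          return (span_mm ** 3 * load_slope_n_per_mm) / (4.0 * width_mm * thickness_mm ** 3)

      # hypothetical specimen: 60 mm span, 10 mm wide, 4 mm thick, 50 N/mm slope
      print(flexural_modulus(60.0, 10.0, 4.0, 50.0), "MPa")  # about 4.2 GPa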

  8. Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey.

    PubMed Central

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y; Tsutsui, K

    1998-01-01

    In our previous studies of hand manipulation task-related neurons, we found many neurons of the parietal association cortex which responded to the sight of three-dimensional (3D) objects. Most of the task-related neurons in the AIP area (the lateral bank of the anterior intraparietal sulcus) were visually responsive and half of them responded to objects for manipulation. Most of these neurons were selective for the 3D features of the objects. More recently, we have found binocular visual neurons in the lateral bank of the caudal intraparietal sulcus (c-IPS area) that preferentially respond to a luminous bar or plate at a particular orientation in space. We studied the responses of axis-orientation selective (AOS) neurons and surface-orientation selective (SOS) neurons in this area with stimuli presented on a 3D computer graphics display. The AOS neurons showed a stronger response to elongated stimuli and showed tuning to the orientation of the longitudinal axis. Many of them preferred a tilted stimulus in depth and appeared to be sensitive to orientation disparity and/or width disparity. The SOS neurons showed a stronger response to a flat than to an elongated stimulus and showed tuning to the 3D orientation of the surface. Their responses increased with the width or length of the stimulus. A considerable number of SOS neurons responded to a square in a random dot stereogram and were tuned to orientation in depth, suggesting their sensitivity to the gradient of disparity. We also found several SOS neurons that responded to a square with tilted or slanted contours, suggesting their sensitivity to orientation disparity and/or width disparity. Area c-IPS is likely to send visual signals of the 3D features of an object to area AIP for the visual guidance of hand actions. PMID:9770229

  9. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of high-speed moving objects, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of a high-speed moving object and synchronously control both the flash of an LED, which projects a structured optical field onto the surface of the moving object, and the trigger of the imaging system, which acquires an image of the deformed fringe pattern; it can also generate a signal, set through software, to synchronously control the LED and the imaging system. We experimented on a household electric fan, successfully acquiring a series of instantaneous, sharp and clear images of the rotating blades and reconstructing their 3D shapes at different rotation speeds.

  10. Sparsity assisted phase retrieval of complex valued objects

    NASA Astrophysics Data System (ADS)

    Gaur, Charu; Khare, Kedar

    2016-04-01

    Iterative phase retrieval of complex-valued objects (phase objects) suffers from the twin-image problem due to the presence of features of both the image and its complex conjugate in the recovered solution. The twin-image problem becomes more severe when the object support is centro-symmetric. In this paper, we demonstrate that by modifying the standard hybrid input-output (HIO) algorithm with an adaptive sparsity enhancement step, the twin-image problem can be addressed successfully even when the object support is centro-symmetric. The adaptive sparsity enhanced algorithm and numerical simulations for binary as well as gray-scale phase objects are presented. The high-quality phase recovery results presented here show the effectiveness of the adaptive sparsity enhanced algorithm.
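
    The sketch below illustrates a hybrid input-output iteration with an occasional soft-threshold on the object magnitude standing in for the sparsity enhancement; the thresholding rule, parameters and toy object are assumptions of this example and not necessarily the authors' exact procedure:

      import numpy as np

      def hio_sparse(fourier_magnitude, support, iterations=200, beta=0.9,
                     shrink_every=20, shrink=0.02, seed=0):
          """Hybrid input-output phase retrieval with a periodic soft-threshold
          on the object magnitude inside the support (sparsity step)."""
          rng = np.random.default_rng(seed)
          inside = support.astype(bool)
          g = support * np.exp(1j * 2 * np.pi * rng.random(support.shape))
          for k in range(iterations):
              G = np.fft.fft2(g)
              Gp = fourier_magnitude * np.exp(1j * np.angle(G))   # Fourier-magnitude constraint
              gp = np.fft.ifft2(Gp)
              g_new = np.where(inside, gp, g - beta * gp)          # HIO object-domain update
              if shrink_every and (k + 1) % shrink_every == 0:
                  mag = np.abs(g_new)
                  mag = np.maximum(mag - shrink * mag.max(), 0.0)  # soft threshold
                  g_new = np.where(inside, mag * np.exp(1j * np.angle(g_new)), g_new)
              g = g_new
          return g

      # toy example: recover a small complex-valued patch from its Fourier magnitude
      true = np.zeros((64, 64), dtype=complex)
      true[24:40, 24:40] = np.exp(1j * 0.5)
      support = np.zeros((64, 64))
      support[20:44, 20:44] = 1.0
      recovered = hio_sparse(np.abs(np.fft.fft2(true)), support)
      print(float(np.abs(recovered).max()))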

  11. Part-based object retrieval in cluttered environment.

    PubMed

    Chi, Yanling; Leung, Maylor K H

    2007-05-01

    A novel local structural approach, which is a sequel to our previous work, is proposed in this paper for object retrieval in a cluttered and occluded environment without identifying the outlines of an object. It works by first extracting consistent and structurally unique local neighborhoods from inputs or models and then voting on the optimal matches, employing dynamic programming and a novel hypercube-based indexing structure. The proposed concepts have been tested on a database with thousands of images and compared with the six-nearest-neighbors shape description, with superior results.

  12. VIRO 3D: fast three-dimensional full-body scanning for humans and other living objects

    NASA Astrophysics Data System (ADS)

    Stein, Norbert; Minge, Bernhard

    1998-03-01

    The development of a family of partial and whole body scanners provides a complete technology for fully three-dimensional and contact-free scans of human bodies or other living objects within seconds. This paper gives insight into the design and the functional principles of the whole body scanner VIRO 3D, which operates on the basis of the laser split-beam method. The arrangement of up to 24 camera/laser combinations, dividing the area into different camera fields, and an all-around sensor configuration travelling in the vertical direction allow a complete 360-degree scan of an object within 6-20 seconds. Due to a special calibration process the different sensors are matched and the measured data are combined. Up to 10 million 3D measuring points with a resolution of approximately 1 mm are processed in all coordinate axes to generate a 3D model. By means of high-performance processors in combination with real-time image processing chips, the image data from almost any number of sensors can be recorded and evaluated synchronously in video real-time. VIRO 3D scanning systems have already been successfully implemented in various applications and will open up new perspectives in other fields, ranging from industry, orthopaedic medicine and plastic surgery to art and photography.

  13. A HIGHLY COLLIMATED WATER MASER BIPOLAR OUTFLOW IN THE CEPHEUS A HW3d MASSIVE YOUNG STELLAR OBJECT

    SciTech Connect

    Chibueze, James O.; Imai, Hiroshi; Tafoya, Daniel; Omodaka, Toshihiro; Chong, Sze-Ning; Kameya, Osamu; Hirota, Tomoya; Torrelles, Jose M.

    2012-04-01

    We present the results of multi-epoch very long baseline interferometry (VLBI) water (H2O) maser observations carried out with the VLBI Exploration of Radio Astrometry toward the Cepheus A HW3d object. We measured for the first time relative proper motions of the H2O maser features, whose spatio-kinematics traces a compact bipolar outflow. This outflow looks highly collimated and expanding through ~280 AU (400 mas) at a mean velocity of ~21 km s⁻¹ (~6 mas yr⁻¹) without taking into account the turbulent central maser cluster. The opening angle of the outflow is estimated to be ~30°. The dynamical timescale of the outflow is estimated to be ~100 years. Our results provide strong support that HW3d harbors an internal massive young star, and the observed outflow could be tracing a very early phase of star formation. We also have analyzed Very Large Array archive data of 1.3 cm continuum emission obtained in 1995 and 2006 toward Cepheus A. The comparative result of the HW3d continuum emission suggests the possibility of the existence of distinct young stellar objects in HW3d and/or strong variability in one of their radio continuum emission components.

  14. Model-based 3-D object recognition using Hermite transform and homotopy techniques

    NASA Astrophysics Data System (ADS)

    Vaz, Richard F.; Cyganski, David; Wright, Charles R.

    1992-02-01

    This paper presents a new method for model-based object recognition and orientation determination which uses a single, comprehensive analytic object model representing the entirety of a suite of images of the object. In this way, object orientation and identity can be directly established from arbitrary views, even though these views are not related by any geometric image transformation. The approach is also applicable to other real and complex- sensed data, such as radar and thermal signatures. The object model is formed from 2-D Hermite function decompositions of an object image expanded about the angles of object rotation by Fourier series. A measure of error between the model and the acquired view is derived as an exact analytic expression, and is minimized over all values of the viewing angle by evaluation of a polynomial system of equations. The roots of this system are obtained via homotopy techniques, and directly provide object identity and orientation information. Results are given which illustrate the performance of this method for noisy real-world images acquired over a single viewing angle variation.

  15. The effects of surface gloss and roughness on color constancy for real 3-D objects.

    PubMed

    Granzier, Jeroen J M; Vergne, Romain; Gegenfurtner, Karl R

    2014-02-21

    Color constancy denotes the phenomenon that the appearance of an object remains fairly stable under changes in illumination and background color. Most of what we know about color constancy comes from experiments using flat, matte surfaces placed on a single plane under diffuse illumination simulated on a computer monitor. Here we investigate whether material properties (glossiness and roughness) have an effect on color constancy for real objects. Subjects matched the color and brightness of cylinders (painted red, green, or blue) illuminated by simulated daylight (D65) or by a reddish light with a Munsell color book illuminated by a tungsten lamp. The cylinders were either glossy or matte and either smooth or rough. The object was placed in front of a black background or a colored checkerboard. We found that color constancy was significantly higher for the glossy objects compared to the matte objects, and higher for the smooth objects compared to the rough objects. This was independent of the background. We conclude that material properties like glossiness and roughness can have significant effects on color constancy.

  16. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  17. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  18. Correlation and 3D-tracking of objects by pointing sensors

    DOEpatents

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
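
    To make the "shortest line connecting the lines of sight" concrete, the sketch below computes the closest approach of two sensor rays and uses the midpoint of that segment as the candidate triangulation point (the sensor positions and pointing vectors are invented values, not from the patent):

      import numpy as np

      def triangulation_point(p1, d1, p2, d2):
          """Closest approach of two lines of sight p_i + t_i * d_i. Returns the
          midpoint of the shortest connecting segment and that segment's length."""
          d1 = d1 / np.linalg.norm(d1)
          d2 = d2 / np.linalg.norm(d2)
          w0 = p1 - p2
          b = d1 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = 1.0 - b * b                      # near zero for (almost) parallel rays
          t1 = (b * e - d) / denom
          t2 = (e - b * d) / denom
          c1, c2 = p1 + t1 * d1, p2 + t2 * d2
          return (c1 + c2) / 2.0, float(np.linalg.norm(c1 - c2))

      # two sensors 10 units apart whose pointing vectors nearly intersect
      p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.1])
      p2, d2 = np.array([10.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.1])
      point, miss_distance = triangulation_point(p1, d1, p2, d2)
      print(point, miss_distance)   # a small miss distance suggests both sensors see the same object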

  19. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects like spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of using these ten coefficients trying to fit them to smooth surfaces (patches) based on the traditional way of determining curvatures, a new approach based on two dimensional geometry is used. For each symmetric object, a unique set of two dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and based on the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two dimensional curves whereby it can be differentiated from the others. It is shown that instead of using the three dimensional discriminant which involves evaluation of the rank of its matrix, it is sufficient to use the two dimensional discriminant which only requires three arithmetic operations.
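
    The two-dimensional discriminant test mentioned above amounts to checking the sign of B^2 - 4AC for an intersection curve written as Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0; a minimal sketch with invented coefficients (degenerate conics are ignored):

      def classify_conic(A, B, C, tol=1e-9):
          """Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 from its 2-D discriminant."""
          discriminant = B * B - 4.0 * A * C
          if discriminant < -tol:
              return "circle" if abs(A - C) < tol and abs(B) < tol else "ellipse"
          if discriminant > tol:
              return "hyperbola"
          return "parabola"

      # planar cuts through simple quadric surfaces yield these curve types
      print(classify_conic(1.0, 0.0, 1.0))    # circle
      print(classify_conic(2.0, 0.0, 1.0))    # ellipse
      print(classify_conic(0.0, 0.0, 1.0))    # parabola
      print(classify_conic(1.0, 0.0, -1.0))   # hyperbola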

  20. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  1. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, aerosols, imaging through walls as in hostage situations, and also in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images since it can allow the isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that the THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from large distances, allowing standoff detection of suspicious objects and humans.
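
    For reference, the FMCW ranging relations the authors allude to recover target distance from the beat frequency of a linear chirp; the sweep parameters in this sketch are illustrative and are not those of the system described above:

      C = 3.0e8  # speed of light, m/s

      def fmcw_range(beat_frequency_hz, sweep_bandwidth_hz, sweep_time_s):
          """Target range R = c * f_beat * T / (2 * B) for a linear FMCW chirp."""
          return C * beat_frequency_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

      def range_resolution(sweep_bandwidth_hz):
          """Range resolution dR = c / (2 * B)."""
          return C / (2.0 * sweep_bandwidth_hz)

      # e.g. a 10 GHz sweep over 1 ms with a 600 kHz beat tone -> 9 m range, 1.5 cm resolution
      print(fmcw_range(600e3, 10e9, 1e-3), range_resolution(10e9))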

  2. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  3. The dorsal stream contribution to phonological retrieval in object naming.

    PubMed

    Schwartz, Myrna F; Faseyitan, Olufunsho; Kim, Junghoon; Coslett, H Branch

    2012-12-01

    Meaningful speech, as exemplified in object naming, calls on knowledge of the mappings between word meanings and phonological forms. Phonological errors in naming (e.g. GHOST named as 'goath') are commonly seen in persisting post-stroke aphasia and are thought to signal impairment in retrieval of phonological form information. We performed a voxel-based lesion-symptom mapping analysis of 1718 phonological naming errors collected from 106 individuals with diverse profiles of aphasia. Voxels in which lesion status correlated with phonological error rates localized to dorsal stream areas, in keeping with classical and contemporary brain-language models. Within the dorsal stream, the critical voxels were concentrated in premotor cortex, pre- and postcentral gyri and supramarginal gyrus with minimal extension into auditory-related posterior temporal and temporo-parietal cortices. This challenges the popular notion that error-free phonological retrieval requires guidance from sensory traces stored in posterior auditory regions and points instead to sensory-motor processes located further anterior in the dorsal stream. In a separate analysis, we compared the lesion maps for phonological and semantic errors and determined that there was no spatial overlap, demonstrating that the brain segregates phonological and semantic retrieval operations in word production.

  4. 3D Cloud Radiative Effects on Aerosol Optical Thickness Retrievals in Cumulus Cloud Fields in the Biomass Burning Region in Brazil

    NASA Technical Reports Server (NTRS)

    Wen, Guo-Yong; Marshak, Alexander; Cahalan, Robert F.

    2004-01-01

    Aerosol amount in clear regions of a cloudy atmosphere is a critical parameter in studying the interaction between aerosols and clouds. Since the global cloud cover is about 50%, cloudy scenes are often encountered in satellite images. Aerosols are more or less transparent, while clouds are extremely reflective in the visible spectrum of solar radiation. The radiative transfer in clear-cloudy conditions is highly three-dimensional (3D). This paper focuses on estimating the 3D effects on aerosol optical thickness retrievals using Monte Carlo simulations. An ASTER image of cumulus cloud fields in the biomass burning region in Brazil is simulated in this study. The MODIS products (i.e., cloud optical thickness, particle effective radius, cloud top pressure, surface reflectance, etc.) are used to construct the cloud property and surface reflectance fields. To estimate the cloud 3-D effects, we assume a plane-parallel stratification of aerosol properties in the 60 km x 60 km ASTER image. The simulated solar radiation at the top of the atmosphere is compared with plane-parallel calculations. Furthermore, the 3D cloud radiative effects on aerosol optical thickness retrieval are estimated.

  5. A 3D object-based model to simulate highly-heterogeneous, coarse, braided river deposits

    NASA Astrophysics Data System (ADS)

    Huber, E.; Huggenberger, P.; Caers, J.

    2016-12-01

    There is a critical need in hydrogeological modeling for geologically more realistic representation of the subsurface. Indeed, widely-used representations of the subsurface heterogeneity based on smooth basis functions such as cokriging or the pilot-point approach fail at reproducing the connectivity of high permeable geological structures that control subsurface solute transport. To realistically model the connectivity of high permeable structures of coarse, braided river deposits, multiple-point statistics and object-based models are promising alternatives. We therefore propose a new object-based model that, according to a sedimentological model, mimics the dominant processes of floodplain dynamics. Contrarily to existing models, this object-based model possesses the following properties: (1) it is consistent with field observations (outcrops, ground-penetrating radar data, etc.), (2) it allows different sedimentological dynamics to be modeled that result in different subsurface heterogeneity patterns, (3) it is light in memory and computationally fast, and (4) it can be conditioned to geophysical data. In this model, the main sedimentological elements (scour fills with open-framework-bimodal gravel cross-beds, gravel sheet deposits, open-framework and sand lenses) and their internal structures are described by geometrical objects. Several spatial distributions are proposed that allow to simulate the horizontal position of the objects on the floodplain as well as the net rate of sediment deposition. The model is grid-independent and any vertical section can be computed algebraically. Furthermore, model realizations can serve as training images for multiple-point statistics. The significance of this model is shown by its impact on the subsurface flow distribution that strongly depends on the sedimentological dynamics modeled. The code will be provided as a free and open-source R-package.

  8. Evaluation methods for retrieving information from interferograms of biomedical objects

    NASA Astrophysics Data System (ADS)

    Podbielska, Halina; Rottenkolber, Matthias

    1996-04-01

    Interferograms in the form of fringe patterns can be produced in two-beam interferometers, holographic or speckle interferometers, in setups realizing moire techniques, or in deflectometers. Optical metrology based on the principle of interference can be applied as a testing tool in biomedical research. By analyzing the fringe pattern images, information about the shape or mechanical behavior of the object under study can be retrieved. Here, some of the techniques for creating fringe pattern images are presented along with methods of analysis. Both intensity-based analysis and methods of phase measurement are mentioned. Applications of interferometric methods, especially in the fields of experimental orthopedics, endoscopy and ophthalmology, are pointed out.

  9. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring of these objects is traditionally conducted by visual inspection, which is time-consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, which is embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ~170 automatically recognized objects is approximately 95%. The results demonstrate
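
    As a rough illustration of the per-voxel PCA step described above, the sketch below (Python/NumPy) computes the "significant" eigenvectors of the points falling in each voxel. The voxel size, the eigenvalue threshold and all names are illustrative assumptions, not the authors' SigVox implementation.

        import numpy as np

        def voxel_significant_eigenvectors(points, voxel_size=0.2, ratio=0.1):
            """For each occupied voxel, return the eigenvectors of the local point
            covariance whose eigenvalues exceed `ratio` times the largest eigenvalue
            (a simple stand-in for 'significant' directions)."""
            keys = np.floor(points / voxel_size).astype(int)
            voxels = {}
            for key, p in zip(map(tuple, keys), points):
                voxels.setdefault(key, []).append(p)
            result = {}
            for key, pts in voxels.items():
                pts = np.asarray(pts)
                if len(pts) < 3:
                    continue                        # too few points for a stable PCA
                cov = np.cov(pts, rowvar=False)     # 3 x 3 covariance of the voxel points
                evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
                keep = evals >= ratio * evals[-1]
                result[key] = evecs[:, keep].T      # rows = significant eigenvectors
            return result

    In the actual method, these per-voxel directions are further mapped onto the triangles of an icosahedron-based sphere approximation and accumulated over the octree scales.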

  10. Demonstration of an Ultrasonic Method for 3-D Visualization of Shallow Buried Underwater Objects

    DTIC Science & Technology

    2011-07-01

    with the X-Y positioning system attached. It is composed of an X-Y gantry system operated by underwater servo motors controlled by the operator’s...user interface errors there are in the software. The test was set up by placing the system over a tank of water containing known objects (Figure 4). The...Requirements Evaluation of all the user interface controls and outputs 3.4.3 Success Criteria 100% error-free, all identified bugs have been

  11. A roadmap to global illumination in 3D scenes: solutions for GPU object recognition applications

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Victor H.; Tapia, Juan J.

    2014-09-01

    Light interaction with matter is of remarkable complexity. Adequate modeling of global illumination has been a widely studied topic since the beginning of computer graphics, and it is still an unsolved problem. The rendering equation for global illumination is based on the refraction and reflection of light in interaction with matter within an environment. This physical process possesses a high computational complexity when implemented in a digital computer. The appearance of an object depends on light interactions with the surface of the material, such as emission, scattering, and absorption. Several image-synthesis methods have been used to realistically render the appearance of light incidence on an object. Recent global illumination algorithms employ mathematical models and computational strategies that improve the efficiency of the simulation solution. This work presents a review of the state of the art of global illumination algorithms and focuses on the efficiency of the solution in a computational implementation on a graphics processing unit. A reliable system is developed to simulate realistic scenes in the context of real-time object recognition under different lighting conditions. Computer simulation results are presented and discussed in terms of discrimination capability and robustness to additive noise, when considering several lighting model reflections and multiple light sources.
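
    For reference, the rendering equation alluded to above is commonly written in Kajiya's form (a standard statement, not specific to this paper), with outgoing radiance L_o, emitted radiance L_e, BRDF f_r, incident radiance L_i and surface normal n:

        L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
            + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i

    Global illumination methods such as path tracing approximate this integral by Monte Carlo sampling, which is what makes GPU implementations attractive for real-time object recognition scenarios.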

  12. Color Perception of 3D Objects: Constancy with Respect To Variation of Surface Gloss.

    PubMed

    Xiao, Bei; Brainard, David H

    2006-01-01

    What determines the color appearance of real objects viewed under natural conditions? The light reflected from different locations on a single object can vary enormously. This variation is enhanced when the material properties of the object are changed from matte to glossy. Yet humans have no trouble assigning a color name to most things. We studied how people perceive the color of spheres in complex scenes. Observers viewed graphics simulations of a three-dimensional scene containing two spheres, test and match. The observer's task was to adjust the match sphere until its color appearance was the same as that of the test sphere. The match sphere was always matte, and observers varied its color by changing the simulated spectral reflectance function. The surface gloss of the test spheres was varied across conditions. The data show that for fixed test sphere body reflectance, color appearance depends on surface gloss. This effect is small, however, compared to the variation that would be expected if observers simply matched the average of the light reflected from the test.

  13. 3D profile measurements of objects by using zero order Generalized Morse Wavelet

    NASA Astrophysics Data System (ADS)

    Kocahan, Özlem; Durmuş, Çağla; Elmas, Merve Naz; Coşkun, Emre; Tiryaki, Erhan; Özder, Serhat

    2017-02-01

    Generalized Morse wavelets are proposed to evaluate the phase information from a projected fringe pattern with spatial carrier frequency in the x direction. The height profile of the object is determined through the phase change distribution by using the phase of the continuous wavelet transform. The phase distribution is extracted from the optical fringe pattern by choosing the zero-order Generalized Morse Wavelet (GMW) as the mother wavelet. In this study, a standard fringe projection technique is used to obtain the images. Experimental results for the GMW phase method are compared with those of the Morlet and Paul wavelet transforms.
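
    As a hedged reminder of the generic machinery involved (standard wavelet fringe analysis, not the authors' specific derivation): the wrapped phase of the fringe signal I(x) is read off the argument of the continuous wavelet transform along its ridge, and the generalized Morse wavelets are usually specified in the frequency domain, here in LaTeX notation:

        W(a, b) = \frac{1}{\sqrt{a}} \int I(x)\, \psi^{*}\!\left(\frac{x - b}{a}\right) \mathrm{d}x,
        \qquad \phi(b) = \arg W\bigl(a_{\mathrm{ridge}}(b),\, b\bigr)

        \Psi_{\beta,\gamma}(\omega) = U(\omega)\, a_{\beta,\gamma}\, \omega^{\beta}\, e^{-\omega^{\gamma}}

    where U(ω) is the unit step and a_{β,γ} a normalizing constant; the "zero order" wavelet is the first member (k = 0) of this family. The height profile then follows from the unwrapped phase change via the geometry of the fringe projection setup.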

  14. Single Quantum Dot with Microlens and 3D-Printed Micro-objective as Integrated Bright Single-Photon Source.

    PubMed

    Fischbach, Sarah; Schlehahn, Alexander; Thoma, Alexander; Srocka, Nicole; Gissibl, Timo; Ristok, Simon; Thiele, Simon; Kaganskiy, Arsenty; Strittmatter, André; Heindel, Tobias; Rodt, Sven; Herkommer, Alois; Giessen, Harald; Reitzenstein, Stephan

    2017-06-21

    Integrated single-photon sources with high photon-extraction efficiency are key building blocks for applications in the field of quantum communications. We report on a bright single-photon source realized by on-chip integration of a deterministic quantum dot microlens with a 3D-printed multilens micro-objective. The device concept benefits from a sophisticated combination of in situ 3D electron-beam lithography to realize the quantum dot microlens and 3D femtosecond direct laser writing for creation of the micro-objective. In this way, we obtain a high-quality quantum device with broadband photon-extraction efficiency of (40 ± 4)% and high suppression of multiphoton emission events with g(2)(τ = 0) < 0.02. Our results highlight the opportunities that arise from tailoring the optical properties of quantum emitters using integrated optics with high potential for the further development of plug-and-play fiber-coupled single-photon sources.

  15. Single Quantum Dot with Microlens and 3D-Printed Micro-objective as Integrated Bright Single-Photon Source

    PubMed Central

    2017-01-01

    Integrated single-photon sources with high photon-extraction efficiency are key building blocks for applications in the field of quantum communications. We report on a bright single-photon source realized by on-chip integration of a deterministic quantum dot microlens with a 3D-printed multilens micro-objective. The device concept benefits from a sophisticated combination of in situ 3D electron-beam lithography to realize the quantum dot microlens and 3D femtosecond direct laser writing for creation of the micro-objective. In this way, we obtain a high-quality quantum device with broadband photon-extraction efficiency of (40 ± 4)% and high suppression of multiphoton emission events with g(2)(τ = 0) < 0.02. Our results highlight the opportunities that arise from tailoring the optical properties of quantum emitters using integrated optics with high potential for the further development of plug-and-play fiber-coupled single-photon sources. PMID:28670600

  16. 3D shape measurement of objects with high dynamic range of surface reflectivity

    NASA Astrophysics Data System (ADS)

    Liu, Gui-Hua; Liu, Xian-Yong; Feng, Quan-Yuan

    2011-08-01

    This paper presents a method that allows a conventional dual-camera structured light system to directly acquire the three-dimensional shape of the whole surface of an object with high dynamic range of surface reflectivity. To reduce the degradation in area-based correlation caused by specular highlights and diffused darkness, we first disregard these highly specular and dark pixels. Then, to solve this problem and further obtain unmatched area data, this binocular vision system was also used as two camera-projector monocular systems operated from different viewing angles at the same time to fill in missing data of the binocular reconstruction. This method involves producing measurable images by integrating such techniques as multiple exposures and high dynamic range imaging to ensure the capture of high-quality phase of each point. An image-segmentation technique was also introduced to distinguish which monocular system is suitable to reconstruct a certain lost point accurately. Our experiments demonstrate that these techniques extended the measurable areas on the high dynamic range of surface reflectivity such as specular objects or scenes with high contrast to the whole projector-illuminated field.

  17. A novel approach to determining the three-dimensional location of microscopic objects with applications to 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Ram, Sripad; Chao, Jerry; Prabhat, Prashant; Ward, E. Sally; Ober, Raimund J.

    2007-02-01

    Recent technological advances have rendered widefield fluorescence microscopy an invaluable tool to image fast dynamics of trafficking events in two dimensions (i.e., in the plane of focus). Three-dimensional trafficking events are studied by sequentially imaging different planes within the specimen by moving the plane of focus with a focusing device. However, these devices are typically slow, and hence when the cell is being imaged at one focal plane, important events could be missed at other focal planes. To overcome this limitation, we recently developed a novel imaging technique called multifocal plane microscopy that enables the simultaneous imaging of multiple focal planes within the sample. Here, by using tools of information theory, we present a quantitative evaluation of this technique in the context of 3D particle tracking. We calculate the Fisher information matrix for the problem of determining the 3D location of an object that is imaged on a multifocal plane setup. In this way, we derive a lower bound on the accuracy with which the object can be localized in 3D. We illustrate our results by considering the object of interest to be a single molecule. It is well known that a conventional widefield microscope has poor depth discrimination capability, and therefore there exists significant uncertainty in determining the axial location of the object, especially when it is close to the plane of focus. Our results predict that the multifocal plane microscope setup offers better accuracy in determining the axial location of objects than a conventional widefield microscope.
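
    The "lower bound on the accuracy" referred to here is the standard Cramér-Rao inequality derived from the Fisher information matrix; schematically, for the unknown 3D location θ = (x0, y0, z0) and any unbiased estimator θ̂ (a textbook statement, not the paper's specific expressions):

        \mathrm{Cov}(\hat{\theta}) \succeq \mathbf{I}(\theta)^{-1},
        \qquad \sqrt{\mathrm{Var}(\hat{z}_0)} \;\ge\; \sqrt{\bigl[\mathbf{I}(\theta)^{-1}\bigr]_{z_0 z_0}}

    so the diagonal entries of the inverse Fisher information matrix bound the best achievable localization accuracy along each axis.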

  18. Sparse color interest points for image retrieval and object categorization.

    PubMed

    Stöttinger, Julian; Hanbury, Allan; Sebe, Nicu; Gevers, Theo

    2012-05-01

    Interest point detection is an important research area in the field of image processing and computer vision. In particular, image retrieval and object categorization heavily rely on interest point detection from which local image descriptors are computed for image matching. In general, interest points are based on luminance, and color has been largely ignored. However, the use of color increases the distinctiveness of interest points. The use of color may therefore provide selective search reducing the total number of interest points used for image matching. This paper proposes color interest points for sparse image representation. To reduce the sensitivity to varying imaging conditions, light-invariant interest points are introduced. Color statistics based on occurrence probability lead to color boosted points, which are obtained through saliency-based feature selection. Furthermore, a principal component analysis-based scale selection method is proposed, which gives a robust scale estimation per interest point. From large-scale experiments, it is shown that the proposed color interest point detector has higher repeatability than a luminance-based one. Furthermore, in the context of image retrieval, a reduced and predictable number of color features show an increase in performance compared to state-of-the-art interest points. Finally, in the context of object recognition, for the Pascal VOC 2007 challenge, our method gives comparable performance to state-of-the-art methods using only a small fraction of the features, reducing the computing time considerably.

  19. Visual retrieval of known objects using supplementary depth data

    NASA Astrophysics Data System (ADS)

    Śluzek, Andrzej

    2016-06-01

    A simple modification of typical content-based visual information retrieval (CBVIR) techniques (e.g. MSER keypoints represented by SIFT descriptors quantized into sufficiently large vocabularies) is discussed and preliminarily evaluated. By using the approximate depths (as the supplementary data) of the detected keypoints, we can significantly improve the credibility of keypoint matching, so that known objects (i.e. objects for which exemplary images are available in the database) can be detected at low computational cost. Thus, the method can be particularly useful in real-time applications of machine vision systems (e.g. in intelligent robotic devices). The paper presents a theoretical model of the method and provides exemplary results for selected scenarios.
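
    The sketch below (Python/NumPy) only illustrates the general idea of using supplementary depth to prune descriptor matches; the ratio test, the depth-consistency rule and all names are assumptions for illustration, not the method evaluated in the paper.

        import numpy as np

        def depth_consistent_matches(desc_q, depth_q, desc_db, depth_db,
                                     ratio=0.7, depth_tol=0.15):
            """Nearest-neighbour descriptor matching with a Lowe-style ratio test,
            followed by rejection of matches whose keypoint depth ratio deviates
            from the median depth ratio of all candidate matches."""
            candidates = []
            for i, d in enumerate(desc_q):
                dist = np.linalg.norm(desc_db - d, axis=1)
                j1, j2 = np.argsort(dist)[:2]
                if dist[j1] < ratio * dist[j2]:
                    candidates.append((i, j1))
            if not candidates:
                return []
            ratios = np.array([depth_q[i] / depth_db[j] for i, j in candidates])
            med = np.median(ratios)
            return [m for m, r in zip(candidates, ratios)
                    if abs(r - med) <= depth_tol * med]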

  20. Calculations of Arctic ozone chemistry using objectively analyzed data in a 3-D CTM

    NASA Technical Reports Server (NTRS)

    Kaminski, J. W.; Mcconnell, J. C.; Sandilands, J. W.

    1994-01-01

    A three-dimensional chemical transport model (CTM) (Kaminski, 1992) has been used to study the evolution of the Arctic ozone during the winter of 1992. The continuity equation has been solved using a spectral method with Rhomboidal 15 (R15) truncation and leap-frog time stepping. Six-hourly meteorological fields from the Canadian Meteorological Center global objective analysis routines run at T79 were degraded to the model resolution. In addition, they were interpolated to the model time grid and were used to drive the model from the surface to 10 mb. In the model, processing of Cl(x) occurred over Arctic latitudes but some of the initial products were still present by mid-January. Also, the large amounts of ClO formed in the model in early January were converted to ClNO3. The results suggest that the model resolution may be insufficient to resolve the details of the Arctic transport during this time period. In particular, the wind field does not move the ClO(x) 'cloud' to the south over Europe as seen in the MLS measurements.

  1. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people’s ability to integrate spatial information over a series of cross sectional images, in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod’s pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial co-location of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatio-temporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images. PMID:20350043

  2. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly-automated technique for medical image segmentation, where a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed to primarily enable the handling of complex object topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work such that multiple sets of points are allowed to simultaneously exist. Point sets can now be automatically merged and split to accommodate for the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by extending the 'turtle algorithm', presented earlier, by using a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task time reduction of 84.7% was achieved when compared to a user performing 2D Livewire segmentation on every slice.

  3. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above

    PubMed Central

    van Doorn, Andrea J.; Wagemans, Johan

    2016-01-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329

  4. Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.

    2017-06-01

    A method for calculating off-axis phase-only holograms of a three-dimensional (3D) object using an accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitudes of the object points on the z-axis, called principal complex amplitudes (PCAs), are calculated in the hologram plane using the Fresnel diffraction formula. The complex amplitudes of off-axis object points at the same depth can then be obtained by 2D shifting of the PCAs. In order to improve the computation speed of the PB-FDA, a convolution operation based on the fast Fourier transform (FFT) is used to calculate the holograms rather than point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in the reconstructed images. The optimal recording distance of the PB-FDA is also analyzed to improve the quality of the reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by the SLM in optically reconstructed images.
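
    A minimal sketch (Python/NumPy) of the generic idea of performing Fresnel propagation as an FFT-based convolution, which is where the speed-up over point-by-point evaluation comes from. This is a textbook transfer-function propagator, not the authors' PB-FDA; parameter names are illustrative.

        import numpy as np

        def fresnel_propagate(u0, wavelength, z, dx):
            """Propagate a complex field u0 (N x N samples, pitch dx) over a
            distance z using the Fresnel transfer function in the Fourier domain."""
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=dx)                # spatial frequencies
            FX, FY = np.meshgrid(fx, fx)
            k = 2 * np.pi / wavelength
            H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
            return np.fft.ifft2(np.fft.fft2(u0) * H)    # convolution via FFT

    A phase-only hologram would then be obtained by keeping only the argument of the resulting complex field, e.g. np.angle(fresnel_propagate(...)).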

  5. Parallel phase-shifting digital holography and its application to high-speed 3D imaging of dynamic object

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Xia, Peng; Wang, Yexin; Matoba, Osamu

    2016-03-01

    Digital holography is a technique for the 3D measurement of objects. The technique uses an image sensor to record an interference fringe image containing the complex amplitude of the object, and numerically reconstructs the complex amplitude by computer. Parallel phase-shifting digital holography is capable of accurate 3D measurement of dynamic objects, because it can reconstruct the complex amplitude of the object, free of the undesired images, from a single hologram. The undesired images are the non-diffraction wave and the conjugate image associated with holography. In parallel phase-shifting digital holography, a hologram whose reference-wave phase is spatially and periodically shifted every other pixel is recorded, so that the complex amplitude of the object is obtained by a single-shot exposure. The recorded hologram is decomposed into the multiple holograms required for phase-shifting digital holography, and the complex amplitude of the object, free from the undesired images, is reconstructed from them. To validate parallel phase-shifting digital holography, a high-speed system was constructed, consisting of a Mach-Zehnder interferometer, a continuous-wave laser, and a high-speed polarization imaging camera. A phase motion picture of dynamic air flow sprayed from a nozzle was recorded by the system at 180,000 frames per second (FPS). A phase motion picture of air flow induced by discharge between two electrodes was also recorded at 1,000,000 FPS when high voltage was applied between the electrodes.
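
    Under one common four-step convention (reference phase shifts of 0, π/2, π and 3π/2 and a known reference wave R), the object complex amplitude recovered by phase-shifting digital holography is, in LaTeX notation (a standard relation under this convention, not necessarily the exact scheme of this system):

        O(x, y) = \frac{\bigl[I_{0}(x, y) - I_{\pi}(x, y)\bigr] + i\,\bigl[I_{\pi/2}(x, y) - I_{3\pi/2}(x, y)\bigr]}{4\, R^{*}(x, y)}

    In the parallel variant, the required intensities are taken from neighbouring pixels of a single exposure instead of from sequential frames, which is what enables single-shot recording of dynamic scenes.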

  6. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  7. If you watch it move, you'll recognize it in 3D: Transfer of depth cues between encoding and retrieval.

    PubMed

    Papenmeier, Frank; Schwan, Stephan

    2016-02-01

    Viewing objects with stereoscopic displays provides additional depth cues through binocular disparity supporting object recognition. So far, it was unknown whether this results from the representation of specific stereoscopic information in memory or a more general representation of an object's depth structure. Therefore, we investigated whether continuous object rotation acting as depth cue during encoding results in a memory representation that can subsequently be accessed by stereoscopic information during retrieval. In Experiment 1, we found such transfer effects from continuous object rotation during encoding to stereoscopic presentations during retrieval. In Experiments 2a and 2b, we found that the continuity of object rotation is important because only continuous rotation and/or stereoscopic depth but not multiple static snapshots presented without stereoscopic information caused the extraction of an object's depth structure into memory. We conclude that an object's depth structure and not specific depth cues are represented in memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Oxidised zirconium versus cobalt alloy bearing surfaces in total knee arthroplasty: 3D laser scanning of retrieved polyethylene inserts.

    PubMed

    Anderson, F L; Koch, C N; Elpers, M E; Wright, T M; Haas, S B; Heyse, T J

    2017-06-01

    We sought to establish whether an oxidised zirconium (OxZr) femoral component causes less loss of polyethylene volume than a cobalt alloy (CoCr) femoral component in total knee arthroplasty. A total of 20 retrieved tibial inserts that had articulated with OxZr components were matched with 20 inserts from CoCr articulations for patient age, body mass index, length of implantation, and revision diagnosis. Changes in dimensions of the articular surfaces were compared with those of pristine inserts using laser scanning. The differences in volume between the retrieved and pristine surfaces of the two groups were calculated and compared. The loss of polyethylene volume was 122 mm³ (standard deviation (sd) 87) in the OxZr group and 170 mm³ (sd 96) in the CoCr group (p = 0.033). The volume loss in the OxZr group was also lower in the medial (72 mm³ (sd 67) versus 92 mm³ (sd 60); p = 0.096) and lateral (49 mm³ (sd 36) versus 79 mm³ (sd 61); p = 0.096) compartments separately, but these differences were not significant. Our results corroborate earlier findings from in vitro testing and visual retrieval analysis which suggest that polyethylene volume loss is lower with OxZr femoral components. Since both OxZr and CoCr are hard surfaces that would be expected to create comparable amounts of polyethylene creep, the differences in volume loss may reflect differences in the in vivo wear of these inserts. Cite this article: Bone Joint J 2017;99-B:793-8. ©2017 The British Editorial Society of Bone & Joint Surgery.

  9. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting the robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.

  10. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    NASA Astrophysics Data System (ADS)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10⁻⁷ m² s⁻¹) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible, to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ~ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.
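
    Written out, the reported scaling relation between feature size b and fiber mask diameter D is, in LaTeX notation:

        b^{3/2} \propto D \quad\Longleftrightarrow\quad b \propto D^{2/3}

    so, for example, halving the fiber diameter reduces the printed feature size by a factor of about 2^{2/3} ≈ 1.6, a sub-linear dependence consistent with the JKR contact picture mentioned in the abstract.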

  11. Shape and motion reconstruction from 3D-to-1D orthographically projected data via object-image relations.

    PubMed

    Ferrara, Matthew; Arnold, Gregory; Stuff, Mark

    2009-10-01

    This paper describes an invariant-based shape- and motion reconstruction algorithm for 3D-to-1D orthographically projected range data taken from unknown viewpoints. The algorithm exploits the object-image relation that arises in echo-based range data and represents a simplification and unification of previous work in the literature. Unlike one proposed approach, this method does not require uniqueness constraints, which makes its algorithmic form independent of the translation removal process (centroid removal, range alignment, etc.). The new algorithm, which simultaneously incorporates every projection and does not use an initialization in the optimization process, requires fewer calculations and is more straightforward than the previous approach. Additionally, the new algorithm is shown to be the natural extension of the approach developed by Tomasi and Kanade for 3D-to-2D orthographically projected data and is applied to a realistic inverse synthetic aperture radar imaging scenario, as well as experiments with varying amounts of aperture diversity and noise.
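
    For orientation, the 3D-to-2D factorization of Tomasi and Kanade cited above recovers motion and shape from a centered measurement matrix through a rank-3 decomposition; a minimal sketch (Python/NumPy, affine ambiguity left unresolved) is given below. This illustrates the cited baseline, not the paper's 1D range-data algorithm.

        import numpy as np

        def factorize(W):
            """Rank-3 factorization of a centered 2F x P measurement matrix W
            (F frames, P points) into motion (2F x 3) and shape (3 x P),
            up to an invertible 3 x 3 affine ambiguity."""
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            S3 = np.diag(np.sqrt(s[:3]))     # split the top-3 singular values
            motion = U[:, :3] @ S3
            shape = S3 @ Vt[:3, :]
            return motion, shape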

  12. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting the robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  13. Rapid and retrievable recording of big data of time-lapse 3D shadow images of microbial colonies.

    PubMed

    Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Saito, Mikako; Matsuoka, Hideaki

    2015-05-15

    We formerly developed an automatic colony count system based on time-lapse shadow image analysis (TSIA). Here, this system has been upgraded and applied to practical rapid decision-making. A microbial sample was spread on/in an agar plate 90 mm in diameter as homogeneously as possible. For several strains, we found that most colonies appeared within a limited time span. Consequently, the number of colonies reached a steady level (Nstdy) and then remained unchanged until the end of the long culture time, giving the confirmed value (Nconf). The equivalence of Nstdy and Nconf, as well as the difference between the times needed to determine Nstdy and Nconf, were statistically significant at p < 0.001. Nstdy meets the requirement of practical routines treating a large number of plates. The difference between Nstdy and Nconf, if any, may be elucidated by means of retrievable big data. Therefore, Nconf is valid for official documentation.

  14. Acquiring multi-viewpoint image of 3D object for integral imaging using synthetic aperture phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Jeong, Min-Ok; Kim, Nam; Park, Jae-Hyeung; Jeon, Seok-Hee; Gil, Sang-Keun

    2009-02-01

    We propose a method of generating elemental images for integral imaging, an auto-stereoscopic three-dimensional display technique, using phase-shifting digital holography. Phase-shifting digital holography records the digital hologram by changing the phase of the reference beam and extracting the complex field of the object beam. Since all 3D information is captured by phase-shifting digital holography, the elemental images for any lens-array specification can be generated from a single phase-shifting digital hologram. We expanded the viewing angle of the generated elemental images by using a synthetic aperture phase-shifting digital hologram. The principle of the proposed method is verified experimentally.

  15. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    PubMed

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  16. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    SciTech Connect

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  17. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    DOE PAGES

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; ...

    2016-01-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  18. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    PubMed Central

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  19. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3 D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3 D imaging for libraries and museums. (LRW)

  20. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3 D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3 D imaging for libraries and museums. (LRW)

  1. Neural network system for 3-D object recognition and pose estimation from a single arbitrary 2-D view

    NASA Astrophysics Data System (ADS)

    Khotanzad, Alireza R.; Liou, James H.

    1992-09-01

    In this paper, a robust and fast system for recognition as well as pose estimation of a 3-D object from a single 2-D perspective of it taken from an arbitrary viewpoint is developed. The approach is invariant to location, orientation, and scale of the object in the perspective. The silhouette of the object in the 2-D perspective is first normalized with respect to location and scale. A set of rotation invariant features derived from complex and orthogonal pseudo-Zernike moments of the image are then extracted. The next stage includes a bank of multilayer feed-forward neural networks (NN) each of which classifies the extracted features. The training set for these nets consists of perspective views of each object taken from several different viewing angles. The NNs in the bank differ in the size of their hidden layer nodes as well as their initial conditions but receive the same input. The classification decisions of all the nets are combined through a majority voting scheme. It is shown that this collective decision making yields better results compared to a single NN operating alone. After the object is classified, two of its pose parameters, namely elevation and aspect angles, are estimated by another module of NNs in a two-stage process. The first stage identifies the likely region of the space that the object is being viewed from. In the second stage, an NN estimator for the identified region is used to compute the pose angles. Extensive experimental studies involving clean and noisy images of seven military ground vehicles are carried out. The performance is compared to two other traditional methods, namely a nearest neighbor rule and a binary decision tree classifier, and it is shown that our approach has major advantages over them.

  2. Topomorphologic Separation of Fused Isointensity Objects via Multiscale Opening: Separating Arteries and Veins in 3-D Pulmonary CT

    PubMed Central

    Gao, Zhiyun; Alford, Sara K.; Sonka, Milan; Hoffman, Eric A.

    2015-01-01

    A novel multiscale topomorphologic approach for the opening of two isointensity objects fused at different locations and scales is presented and applied to separating arterial and venous trees in 3-D pulmonary multidetector X-ray computed tomography (CT) images. Initialized with seeds, the two isointensity objects (arteries and veins) grow iteratively while maintaining their spatial exclusiveness and eventually form two mutually disjoint objects at convergence. The method is intended to solve the following two fundamental challenges: how to find the local size of morphological operators and how to trace the continuity of locally separated regions. These challenges are met by combining the fuzzy distance transform (FDT), a morphologic feature, with a topologic fuzzy connectivity and a new morphological reconstruction step to iteratively open finer and finer details starting at large scales and progressing toward smaller scales. The method employs efficient user intervention at locations where the local morphological separability assumption does not hold due to imaging ambiguities or any other reason. The approach has been validated on mathematically generated tubular objects and applied to clinical pulmonary noncontrast CT data for separating arteries and veins. The tradeoff between accuracy and the required user intervention has been quantitatively examined by comparison with manual outlining. The experimental study, based on a blind seed selection strategy, has demonstrated that above 95% accuracy may be achieved using 25–40 seeds for each of the arteries and veins. Our method is very promising for semiautomated separation of arteries and veins in pulmonary CT images even when there is no object-specific intensity variation at conjoining locations. PMID:20199919

  3. Validating Air Force Weather Satellite Retrieved 3D Cloud Products against Independent Ground and Space-Based Assets

    NASA Astrophysics Data System (ADS)

    Nobis, T. E.; Conner, M. D.

    2016-12-01

    Air Force Weather (AFW) has documented requirements for global cloud analyses and forecasts to support DoD missions around the world. Cloud analyses are constructed using passive cloud detection algorithms from 17 different near-real-time satellite sources. The algorithms are run on individual satellite transmissions at native satellite resolution in near real time. These native-resolution products are then used to construct an hourly global merge on a 24 km grid. AFW has also recently started creation of a time-delayed global cloud reanalysis to produce a 'best possible' analysis for climatology and verification purposes. Cloud forecasts include global short-range cloud forecasts created using advection techniques as well as statistically post-processed cloud forecast products derived from various global and regional numerical weather forecast models. The result is a mix of cloud products covering different spatial and temporal resolutions with varying latency requirements. AFW has started to aggressively benchmark the performance of its current capabilities. Cloud information collected from so-called 'active' sensors on the ground at the DOE-ARM sites and from space by such instruments as CloudSat, CALIPSO and CATS is being utilized to characterize the performance of AFW products derived largely by passive means. The goal is to understand the performance of today's 3D cloud analysis and forecast products in order to help shape the requirements and standards for a future cloud analysis and forecast system based on a numerical weather model and driven by advanced 4DVAR techniques. This presentation will present selected results from these benchmarking efforts and highlight insights from the comparison of passively and actively derived observations, as well as the impacts of varying spatial and temporal depictions of clouds.

  4. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  5. Bringing Cosmic Objects Down to Earth: An Overview of 3D Modelling and Printing in Astronomy and Astronomy Communication

    NASA Astrophysics Data System (ADS)

    Arcand, K.; Megan, W.; DePasquale, J.; Jubett, A.; Edmonds, P.; DiVona, K.

    2017-09-01

    Three-dimensional (3D) modelling is more than just good fun; it offers a new vehicle for representing and understanding scientific data and gives experts and non-experts alike the ability to manipulate models and gain new perspectives on data. This article explores the use of 3D modelling and printing in astronomy and astronomy communication and looks at some of the practical challenges, and solutions, of using 3D modelling, visualisation and printing in this way.

  6. Retrieval of Shape Characteristics for Buried Objects with GPR Monitoring

    NASA Astrophysics Data System (ADS)

    Soldovieri, F.; Comite, D.; Galli, A.; Valerio, G.; Barone, P. M.; Lauro, S. E.; Mattei, E.; Pettinelli, E.

    2012-04-01

    Information retrieval on the location and the geometrical features (dimensions and shape) of buried objects is of fundamental importance in geoscience areas involving environmental protection, mine clearance, archaeological investigations, space and planetary exploration, and so forth. Among the different non-invasive sensing techniques usually employed to obtain this kind of information, those based on ground-penetrating radar (GPR) instruments are well established and suited to the mentioned purposes [1]. In this context, our interest in the present work is specifically focused on testing the potential performance of typical GPR instruments by means of appropriate data processing. It will be shown in particular to what extent the use of a suitable "microwave tomographic approach" [2] is able to furnish a shape estimation of the targets, possibly recognizing different kinds of canonical geometries, even for targets having reduced cross sections and in critical conditions, where the scatterer size is comparable with the resolution limits imposed by the usual measurement configurations. Our study starts by obtaining the typical "direct" information from GPR techniques, that is, the scattered field in subsurface environments in the form of radargrams. In order to cover a wide variety of scenarios for the operating conditions, this goal is achieved by means of two different and independent approaches [3]. One approach is based on direct measurements through an experimental laboratory setup: commercial GPR instruments (typically bistatic configurations operating around the 1 GHz frequency range) are used to collect radargram profiles by investigating an artificial basin filled with liquid and/or granular materials (sand, etc.), in which targets (having different constitutive parameters, shapes, and dimensions) can be buried. The other approach is based on numerical GPR simulations by means of a commercial CAD electromagnetic tool (CST), whose suitable implementation and data

  7. A generalized fuzzy mathematical morphology and its application in robust 2-D and 3-D object representation.

    PubMed

    Chatzis, V; Pitas, I

    2000-01-01

    In this paper, the generalized fuzzy mathematical morphology (GFMM) is proposed, based on a novel definition of the fuzzy inclusion indicator (FII). The FII is a measure of the inclusion of one fuzzy set into another and is itself proposed to be a fuzzy set. It is proven that the FII obeys a set of axioms, which are proposed as extensions of the known axioms that any inclusion indicator should obey and which correspond to the desirable properties of any mathematical morphology operation. The GFMM provides a very powerful and flexible tool for morphological operations. The binary and grayscale mathematical morphologies can be considered as special cases of the proposed GFMM. An application for robust skeletonization and shape decomposition of two-dimensional (2-D) and three-dimensional (3-D) objects is presented. Simulation examples show that the reconstruction of objects from their skeletal subsets achieved by using the GFMM is, in most cases, better than that achieved by using binary mathematical morphology. Furthermore, the use of the GFMM for skeletonization and shape decomposition preserves the shape and the location of the skeletal subsets and spines.
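
    For orientation, a minimal sketch of one common (min/max-based) definition of fuzzy dilation and erosion on 1-D membership signals is given below; it is not the GFMM or the FII of the paper, only the kind of fuzzy morphological operation that the GFMM generalizes. Function and variable names are hypothetical.

```python
import numpy as np

def fuzzy_dilate(f, g):
    """Fuzzy dilation (f (+) g)(x) = max_y min(f(y), g(x - y)); g is odd-length."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    n, m = len(f), len(g)
    r = m // 2
    fp = np.pad(f, r, constant_values=0.0)
    out = np.empty(n)
    for x in range(n):
        window = fp[x:x + m]                 # f values around position x
        out[x] = np.max(np.minimum(window, g[::-1]))
    return out

def fuzzy_erode(f, g):
    """Fuzzy erosion (f (-) g)(x) = min_y max(f(x + y), 1 - g(y)); g is odd-length."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    n, m = len(f), len(g)
    r = m // 2
    fp = np.pad(f, r, constant_values=0.0)
    out = np.empty(n)
    for x in range(n):
        window = fp[x:x + m]
        out[x] = np.min(np.maximum(window, 1.0 - g))
    return out
```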

  8. Retrieval and reconsolidation of object recognition memory are independent processes in the perirhinal cortex.

    PubMed

    Balderas, I; Rodriguez-Ortiz, C J; Bermudez-Rattoni, F

    2013-12-03

    Reconsolidation refers to the destabilization/re-stabilization process upon memory reactivation. However, the parameters needed to induce reconsolidation remain unclear. Here we evaluated the capacity of memory retrieval to induce reconsolidation of object recognition memory in rats. To assess whether retrieval is indispensable to trigger reconsolidation, we injected muscimol in the perirhinal cortex to block retrieval, and anisomycin (ani) to impede reconsolidation. We observed that ani impaired reconsolidation in the absence of retrieval. Therefore, stored memory underwent reconsolidation even though it was not recalled. These results indicate that retrieval and reconsolidation of object recognition memory are independent processes.

  9. 3D shape and eccentricity measurements of fast rotating rough objects by two mutually tilted interference fringe systems

    NASA Astrophysics Data System (ADS)

    Czarske, J. W.; Kuschmierz, R.; Günther, P.

    2013-06-01

    Precise measurements of the distance, eccentricity and 3D shape of fast-moving objects such as turning parts on lathes, gear shafts, magnetic bearings, camshafts, crankshafts and rotors of vacuum pumps are, on the one hand, important tasks. On the other hand, they are big challenges, since contactless precise measurement techniques are required. Optical techniques are well suited for distance measurements of non-moving surfaces; however, measurements of laterally fast-moving surfaces are still challenging. For such tasks, the laser Doppler distance sensor technique was invented at TU Dresden some years ago. This technique is realized by two mutually tilted interference fringe systems, where the distance is encoded in the phase difference between the generated interference signals. However, due to the speckle effect, different random envelopes and phase jumps of the interference signals occur, which disturb the estimation of the phase difference between the interference signals. In this paper, we report on a scientific breakthrough in the measurement uncertainty budget which has been achieved recently. By matching the illumination and receiving optics, the measurement uncertainty of the displacement and distance can be reduced by about one order of magnitude. For displacement measurements of a recurring rough surface, a standard deviation of 110 nm was attained at lateral velocities of 5 m/s. From the additionally measured lateral velocity and the rotational speed, the two-dimensional shape of rotating objects is calculated; the three-dimensional shape can be obtained by employing a line camera. Since the measurement uncertainty of the displacement, vibration, distance, eccentricity, and shape is nearly independent of the lateral surface velocity, this technique is predestined for fast-rotating objects. In particular, it can be advantageously used for the quality control of workpieces inside a lathe, towards the reduction of process tolerances, installation times and

  10. Physical security and cyber security issues and human error prevention for 3D printed objects: detecting the use of an incorrect printing material

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-06-01

    A wide variety of characteristics of 3D printed objects have been linked to impaired structural integrity and use-efficacy. The printing material can also have a significant impact on the quality, utility and safety characteristics of a 3D printed object. Material issues can be created by vendor issues, physical security issues and human error. This paper presents and evaluates a system that can be used to detect incorrect material use in a 3D printer, using visible light imaging. Specifically, it assesses the ability to ascertain the difference between materials of different color and different types of material with similar coloration.

  11. Encoding, learning, and spatial updating of multiple object locations specified by 3-D sound, spatial language, and vision.

    PubMed

    Klatzky, Roberta L; Lippa, Yvonne; Loomis, Jack M; Golledge, Reginald G

    2003-03-01

    Participants standing at an origin learned the distance and azimuth of target objects that were specified by 3-D sound, spatial language, or vision. We tested whether the ensuing target representations functioned equivalently across modalities for purposes of spatial updating. In experiment 1, participants localized targets by pointing to each and verbalizing its distance, both directly from the origin and at an indirect waypoint. In experiment 2, participants localized targets by walking to each directly from the origin and via an indirect waypoint. Spatial updating bias was estimated by the spatial-coordinate difference between indirect and direct localization; noise from updating was estimated by the difference in variability of localization. Learning rate and noise favored vision over the two auditory modalities. For all modalities, bias during updating tended to move targets forward, comparably so for three and five targets and for forward and rightward indirect-walking directions. Spatial language produced additional updating bias and noise from updating. Although spatial representations formed from language afford updating, they do not function entirely equivalently to those from intrinsically spatial modalities.

  12. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    SciTech Connect

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based ''demons'' algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a
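
    The grayscale-driven core of such a framework is often the classical Thirion "demons" update; the sketch below shows that basic step in 2-D, with Gaussian regularization of the displacement field. It stands in for, and does not reproduce, the object-based, seed-constrained, hierarchical 3-D method described above; all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed, moving, n_iter=100, sigma=2.0):
    """Classical grayscale demons registration step (illustrative 2-D sketch)."""
    fixed = fixed.astype(float)
    u = np.zeros(fixed.shape)            # displacement along rows
    v = np.zeros(fixed.shape)            # displacement along columns
    gy, gx = np.gradient(fixed)          # gradient of the reference image
    rows, cols = np.indices(fixed.shape)
    for _ in range(n_iter):
        warped = map_coordinates(moving.astype(float), [rows + u, cols + v],
                                 order=1, mode="nearest")
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2
        denom[denom == 0] = 1.0
        # demons force: push along the reference gradient to reduce the difference
        u += -diff * gy / denom
        v += -diff * gx / denom
        # Gaussian smoothing acts as the regularizer of the displacement field
        u = gaussian_filter(u, sigma)
        v = gaussian_filter(v, sigma)
    return u, v
```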

  13. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  14. Retrieving Leaf Area Index and Foliage Profiles Through Voxelized 3-D Forest Reconstruction Using Terrestrial Full-Waveform and Dual-Wavelength Echidna Lidars

    NASA Astrophysics Data System (ADS)

    Strahler, A. H.; Yang, X.; Li, Z.; Schaaf, C.; Wang, Z.; Yao, T.; Zhao, F.; Saenz, E.; Paynter, I.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Martel, J.; Howe, G.; Hewawasam, K.; Jupp, D.; Culvenor, D.; Newnham, G.; Lowell, J.

    2013-12-01

    Measuring and monitoring canopy biophysical parameters provide a baseline for carbon flux studies related to deforestation and disturbance in forest ecosystems. Terrestrial full-waveform lidar systems, such as the Echidna Validation Instrument (EVI) and its successor Dual-Wavelength Echidna Lidar (DWEL), offer rapid, accurate, and automated characterization of forest structure. In this study, we apply a methodology based on voxelized 3-D forest reconstructions built from EVI and DWEL scans to directly estimate two important biophysical parameters: Leaf Area Index (LAI) and foliage profile. Gap probability, apparent reflectance, and volume associated with the laser pulse footprint at the observed range are assigned to the foliage scattering events in the reconstructed point cloud. Leaf angle distribution is accommodated with a simple model based on gap probability with zenith angle as observed in individual scans of the stand. The DWEL instrument, which emits simultaneous laser pulses at 1064 nm and 1548 nm wavelengths, provides a better capability to separate trunk and branch hits from foliage hits due to water absorption by leaf cellular contents at 1548 nm band. We generate voxel datasets of foliage points using a classification methodology solely based on pulse shape for scans collected by EVI and with pulse shape and band ratio for scans collected by DWEL. We then compare the LAIs and foliage profiles retrieved from the voxel datasets of the two instruments at the same red fir site in Sierra National Forest, CA, with each other and with observations from airborne and field measurements. This study further tests the voxelization methodology in obtaining LAI and foliage profiles that are largely free of clumping effects and returns from woody materials in the canopy. These retrievals can provide a valuable 'ground-truth' validation data source for large-footprint spaceborne or airborne lidar systems retrievals.
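
    As a simplified illustration of the final step (not the voxel-based estimator of the paper), a cumulative LAI and a foliage profile can be derived from a vertical gap-probability profile under a Beer's-law assumption; the symbols below (Pgap_z, G) and the spherical leaf-angle default are generic assumptions.

```python
import numpy as np

def foliage_profile_from_gap(Pgap_z, z, G=0.5):
    """Cumulative LAI and foliage area volume density from gap probability.

    Pgap_z : gap probability from the ground up to height z (near-zenith view)
    z      : heights in metres, increasing
    G      : projection factor; 0.5 corresponds to a spherical leaf angle distribution
    """
    Pgap_z = np.clip(np.asarray(Pgap_z, dtype=float), 1e-6, 1.0)
    L_z = -np.log(Pgap_z) / G          # cumulative leaf area index up to height z
    favd = np.gradient(L_z, z)         # foliage profile dL/dz
    return L_z, favd
```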

  15. Phase and amplitude retrieval of objects embedded in a sinusoidal background from its diffraction pattern

    SciTech Connect

    Wu, Chu; Ng, Tuck Wah; Neild, Adrian

    2010-04-01

    Efforts of phase and amplitude retrieval from diffraction patterns have almost exclusively been applied for nonperiodic objects. We investigated the quality of retrieval of nonperiodic objects embedded in a sinusoidal background, using the approach of iterative hybrid input-output with oversampling. Two strategies were employed; one by filtering in the frequency domain prior to phase retrieval, and the other by filtering the phase or amplitude image after retrieval. Results obtained indicate better outcomes with the latter approach provided detector noise is not excessive.
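
    For reference, a minimal sketch of the iterative hybrid input-output (HIO) step used in such studies is given below, for a real, non-negative object with a known support; the oversampling setup, the sinusoidal-background filtering strategies, and the detector noise model of the paper are not reproduced.

```python
import numpy as np

def hio(measured_magnitude, support, n_iter=500, beta=0.9, seed=0):
    """Fienup hybrid input-output phase retrieval (illustrative sketch).

    measured_magnitude : measured Fourier modulus of the unknown object
    support            : boolean array, True where the object may be nonzero
    """
    rng = np.random.default_rng(seed)
    # start from the measured modulus with random phases
    g = np.fft.ifft2(measured_magnitude *
                     np.exp(2j * np.pi * rng.random(measured_magnitude.shape)))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # impose the measured Fourier modulus, keep the current phase estimate
        G = measured_magnitude * np.exp(1j * np.angle(G))
        g_prime = np.fft.ifft2(G)
        # object-domain HIO update: keep g' where constraints hold,
        # otherwise push the previous estimate down by beta * g'
        violate = ~support | (g_prime.real < 0)
        g = np.where(violate, g - beta * g_prime, g_prime)
    return g.real * support
```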

  16. FINAL INTERIM REPORT, CANDIDATE SITES, MACHINES IN USE, DATA STORAGE AND TRANSMISSION METHODS: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES

    EPA Science Inventory

    The purpose of this Work Assignment, 02-03, is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D images ...

  18. Development of a 3D WebGIS System for Retrieving and Visualizing CityGML Data Based on their Geometric and Semantic Characteristics by Using Free and Open Source Technology

    NASA Astrophysics Data System (ADS)

    Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    CityGML is considered as an optimal standard for representing 3D city models. However, international experience has shown that visualization of the latter is quite difficult to be implemented on the web, due to the large size of data and the complexity of CityGML. As a result, in the context of this paper, a 3D WebGIS application is developed in order to successfully retrieve and visualize CityGML data in accordance with their respective geometric and semantic characteristics. Furthermore, the available web technologies and the architecture of WebGIS systems are investigated, as provided by international experience, in order to be utilized in the most appropriate way for the purposes of this paper. Specifically, a PostgreSQL/ PostGIS Database is used, in compliance with the 3DCityDB schema. At Server tier, Apache HTTP Server and GeoServer are utilized, while a Server Side programming language PHP is used. At Client tier, which implemented the interface of the application, the following technologies were used: JQuery, AJAX, JavaScript, HTML5, WebGL and Ol3-Cesium. Finally, it is worth mentioning that the application's primary objectives are a user-friendly interface and a fully open source development.
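
    A minimal sketch of the retrieval side of such an architecture is shown below: querying building features that GeoServer publishes (e.g., from the 3DCityDB-backed PostGIS database) through a standard WFS GetFeature request and reading them as GeoJSON. The endpoint URL, layer name, and attribute names are hypothetical placeholders that depend on the actual deployment.

```python
import requests

# Hypothetical GeoServer endpoint and published layer name.
WFS_URL = "http://localhost:8080/geoserver/citydb/ows"

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "citydb:building",      # layer published from the 3DCityDB schema
    "outputFormat": "application/json",  # GeoJSON for easy client-side use
    "count": 10,
}

resp = requests.get(WFS_URL, params=params, timeout=30)
resp.raise_for_status()
for feature in resp.json()["features"]:
    # each feature carries geometry plus semantic attributes of the city object
    print(feature["id"], feature["properties"].get("measured_height"))
```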

  19. Phase-retrieval ghost imaging of complex-valued objects

    SciTech Connect

    Gong Wenlin; Han Shensheng

    2010-08-15

    An imaging approach, based on ghost imaging, is reported to recover a pure-phase object or a complex-valued object. Our analytical results, which are backed up by numerical simulations, demonstrate that both the complex-valued object and its amplitude-dependent part can be separately and nonlocally reconstructed using this approach. Both effects influencing the quality of reconstructed images and methods to further improve the imaging quality are also discussed.
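
    For context, the sketch below shows the basic intensity-correlation reconstruction that ghost-imaging schemes build on; the phase-retrieval extension for complex-valued objects described in the paper is not reproduced, and the variable names are hypothetical.

```python
import numpy as np

def ghost_image(speckle_patterns, bucket_signals):
    """Correlation ghost-imaging reconstruction (illustrative sketch).

    speckle_patterns : (K, H, W) reference intensity patterns I_k(x, y)
    bucket_signals   : (K,) single-pixel 'bucket' measurements B_k
    Reconstruction: G(x, y) = <I B> - <I><B>  (fluctuation correlation)
    """
    I = np.asarray(speckle_patterns, dtype=float)
    B = np.asarray(bucket_signals, dtype=float)
    return (I * B[:, None, None]).mean(axis=0) - I.mean(axis=0) * B.mean()
```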

  20. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual mesh leapfrog-scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems, of current practical interest, involving structured composites and metamaterials.
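
    For contrast with the unstructured Delaunay/Voronoi generalization described above, a minimal sketch of the classical structured Yee leapfrog update in one dimension (vacuum, soft Gaussian source) is given below; grid sizes and constants are illustrative only.

```python
import numpy as np

# Classical 1-D Yee leapfrog scheme on a structured grid (vacuum).
nz, nt = 400, 800
c0, dz = 3e8, 1e-3
dt = 0.99 * dz / c0                      # Courant-stable time step
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

Ex = np.zeros(nz)                        # E on integer grid nodes
Hy = np.zeros(nz - 1)                    # H staggered by half a cell

for n in range(nt):
    # update H at half time steps from the spatial difference of E
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
    # update E at full time steps from the spatial difference of H (interior nodes)
    Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])
    # soft Gaussian source in the middle of the grid
    Ex[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)
```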

  1. A method of 3D reconstruction via ISAR Sequences based on scattering centers association for space rigid object

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zou, Jiangwei; Xu, Shiyou; Tian, Biao; Chen, Zengping

    2014-10-01

    In this paper, the effect of orbital motion on the scattering-center trajectories is analyzed and introduced into the scattering-center association as a constraint. A screening method for feature points is presented to analyze the false points in the reconstructed result and the wrong associations that lead to these false points. The loop iteration between the 3D reconstruction and the association result further improves the precision of the final reconstruction. Simulation data show the validity of the algorithm.

  2. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    PubMed

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work.

  3. Retrieval is not necessary to trigger reconsolidation of object recognition memory in the perirhinal cortex

    PubMed Central

    Santoyo-Zedillo, Marianela; Rodriguez-Ortiz, Carlos J.; Chavez-Marchetta, Gianfranco; Bermudez-Rattoni, Federico

    2014-01-01

    Memory retrieval has been considered as requisite to initiate memory reconsolidation; however, some studies indicate that blocking retrieval does not prevent memory from undergoing reconsolidation. Since N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) glutamate receptors in the perirhinal cortex have been involved in object recognition memory formation, the present study evaluated whether retrieval and reconsolidation are independent processes by manipulating these glutamate receptors. The results showed that AMPA receptor antagonist infusions in the perirhinal cortex blocked retrieval, but did not affect memory reconsolidation, although NMDA receptor antagonist infusions disrupted reconsolidation even if retrieval was blocked. Importantly, neither of these antagonists disrupted short-term memory. These data suggest that memory underwent reconsolidation even in the absence of retrieval. PMID:25128536

  4. Initial Experiences with Retrieving Similar Objects in Simulation Data

    SciTech Connect

    Cheung, S-C S; Kamath, C

    2003-02-21

    Comparing the output of a physics simulation with an experiment, referred to as 'code validation,' is often done by visually comparing the two outputs. In order to determine which simulation is a closer match to the experiment, more quantitative measures are needed. In this paper, we describe our early experiences with this problem by considering the slightly simpler problem of finding objects in a image that are similar to a given query object. Focusing on a dataset from a fluid mixing problem, we report on our experiments with different features that are used to represent the objects of interest in the data. These early results indicate that the features must be chosen carefully to correctly represent the query object and the goal of the similarity search.
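
    The distance-based retrieval step such experiments rely on can be sketched as follows, assuming each object has already been reduced to a fixed-length feature vector; the specific features used in the paper are not reproduced here.

```python
import numpy as np

def retrieve_similar(query_features, gallery_features, k=5):
    """Rank gallery objects by Euclidean distance to the query features.

    query_features   : (d,) feature vector describing the query object
    gallery_features : (n, d) feature vectors of objects extracted from the data
    """
    d = np.linalg.norm(gallery_features - query_features, axis=1)
    order = np.argsort(d)[:k]            # indices of the k most similar objects
    return order, d[order]
```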

  5. The CU 2-D-MAX-DOAS instrument - Part 1: Retrieval of 3-D distributions of NO2 and azimuth-dependent OVOC ratios

    NASA Astrophysics Data System (ADS)

    Ortega, I.; Koenig, T.; Sinreich, R.; Thomson, D.; Volkamer, R.

    2015-06-01

    We present an innovative instrument telescope and describe a retrieval method to probe three-dimensional (3-D) distributions of atmospheric trace gases that are relevant to air pollution and tropospheric chemistry. The University of Colorado (CU) two-dimensional (2-D) multi-axis differential optical absorption spectroscopy (CU 2-D-MAX-DOAS) instrument measures nitrogen dioxide (NO2), formaldehyde (HCHO), glyoxal (CHOCHO), oxygen dimer (O2-O2, or O4), and water vapor (H2O); nitrous acid (HONO), bromine monoxide (BrO), and iodine monoxide (IO) are among other gases that can in principle be measured. Information about aerosols is derived through coupling with a radiative transfer model (RTM). The 2-D telescope has three modes of operation: mode 1 measures solar scattered photons from any pair of elevation angle (-20° < EA < +90° or zenith; zero is to the horizon) and azimuth angle (-180° < AA < +180°; zero being north); mode 2 measures any set of azimuth angles (AAs) at constant elevation angle (EA) (almucantar scans); and mode 3 tracks the direct solar beam via a separate view port. Vertical profiles of trace gases are measured and used to estimate mixing layer height (MLH). Horizontal distributions are then derived using MLH and parameterization of RTM (Sinreich et al., 2013). NO2 is evaluated at different wavelengths (350, 450, and 560 nm), exploiting the fact that the effective path length varies systematically with wavelength. The area probed is constrained by O4 observations at nearby wavelengths and has a diurnal mean effective radius of 7.0 to 25 km around the instrument location; i.e., up to 1960 km2 can be sampled with high time resolution. The instrument was deployed as part of the Multi-Axis DOAS Comparison campaign for Aerosols and Trace gases (MAD-CAT) in Mainz, Germany, from 7 June to 6 July 2013. We present first measurements (modes 1 and 2 only) and describe a four-step retrieval to derive (a) boundary layer vertical profiles and MLH of NO2; (b

  6. Phase retrieval with the reverse projection method in the presence of object's scattering

    NASA Astrophysics Data System (ADS)

    Wang, Zhili; Gao, Kun; Wang, Dajiang

    2017-08-01

    X-ray grating interferometry can provide substantially increased contrast over traditional attenuation-based techniques in biomedical applications, and therefore novel and complementary information. Recently, special attention has been paid to quantitative phase retrieval in X-ray grating interferometry, which is mandatory to perform phase tomography, to achieve material identification, etc. An innovative approach, dubbed "Reverse Projection" (RP), has been developed for quantitative phase retrieval. The RP method abandons grating scanning completely, and is thus advantageous in terms of higher efficiency and reduced radiation damage. Therefore, it is expected that this novel method will find its potential in preclinical and clinical implementations. Strictly speaking, the reverse projection method is applicable to objects exhibiting only absorption and refraction. In this contribution, we discuss phase retrieval with the reverse projection method for general objects exhibiting absorption, refraction and scattering simultaneously. In particular, we investigate the influence of the object's scattering on the retrieved refraction signal. Both theoretical analysis and numerical experiments are performed. The results show that the retrieved refraction signal is the product of the object's refraction and scattering signals for small values. In the case of strong scattering, the reverse projection method cannot provide reliable phase retrieval. The presented results will guide the use of the reverse projection method in future practical applications, and help to explain some possible artifacts in the retrieved images and/or reconstructed slices.

  7. Optical full-depth refocusing of 3-D objects based on subdivided-elemental images and local periodic δ-functions in integral imaging.

    PubMed

    Ai, Ling-Yu; Dong, Xiao-Bin; Jang, Jae-Young; Kim, Eun-Soo

    2016-05-16

    We propose a new approach for optical refocusing of three-dimensional (3-D) objects on their real depth without a pickup-range limitation based on subdivided-elemental image arrays (sub-EIAs) and local periodic δ-function arrays (L-PDFAs). The captured EIA from the 3-D objects locating out of the pickup-range, is divided into a number of sub-EIAs depending on the object distance from the lens array. Then, by convolving these sub-EIAs with each L-PDFA whose spatial period corresponds to the specific object's depth, as well as whose size is matched to that of the sub-EIA, arrays of spatially-filtered sub-EIAs (SF-sub-EIAs) for each object depth can be uniquely extracted. From these arrays of SF-sub-EIAs, 3-D objects can be optically reconstructed to be refocused on their real depth. Operational principle of the proposed method is analyzed based on ray-optics. In addition, to confirm the feasibility of the proposed method in the practical application, experiments with test objects are carried out and the results are comparatively discussed with those of the conventional method.

  8. Temporal integration of 3D coherent motion cues defining visual objects of unknown orientation is impaired in amnestic mild cognitive impairment and Alzheimer's disease.

    PubMed

    Lemos, Raquel; Figueiredo, Patrícia; Santana, Isabel; Simões, Mário R; Castelo-Branco, Miguel

    2012-01-01

    The nature of visual impairments in Alzheimer's disease (AD) and their relation with other cognitive deficits remains highly debated. We asked whether independent visual deficits are present in AD and amnestic forms of mild cognitive impairment (MCI) in the absence of other comorbidities by performing a hierarchical analysis of low-level and high-level visual function in MCI and AD. Since parietal structures are a frequent pathophysiological target in AD and subserve 3D vision driven by motion cues, we hypothesized that the parietal visual dorsal stream function is predominantly affected in these conditions. We used a novel 3D task combining three critical variables to challenge parietal function: 3D motion coherence of objects of unknown orientation, with constrained temporal integration of these cues. Groups of amnestic MCI (n = 20), AD (n = 19), and matched controls (n = 20) were studied. Low-level visual function was assessed using psychophysical contrast sensitivity tests probing the magnocellular, parvocellular, and koniocellular pathways. We probed visual ventral stream function using the Benton Face Recognition task. We have found hierarchical visual impairment in AD, independently of neuropsychological deficits, in particular in the novel parietal 3D task, which was selectively affected in MCI. Integration of local motion cues into 3D objects was specifically and most strongly impaired in AD and MCI, especially when 3D motion was unpredictable, with variable orientation and short-lived in space and time. In sum, specific early dorsal stream visual impairment occurs independently of ventral stream, low-level visual and neuropsychological deficits, in amnestic types of MCI and AD.

  9. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
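
    Step (1), image refocusing, is commonly done by shift-and-add over sub-aperture views; the sketch below illustrates only that generic operation (it is not the Raytrix processing chain), with hypothetical array names and a loosely defined refocusing parameter alpha.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(sub_aperture_images, uv_coords, alpha):
    """Shift-and-add refocusing from sub-aperture views (illustrative sketch).

    sub_aperture_images : (K, H, W) views extracted behind the lenslet array
    uv_coords           : (K, 2) relative aperture positions of the views
    alpha               : refocusing parameter selecting the synthetic focal plane
    """
    acc = np.zeros(sub_aperture_images.shape[1:], dtype=float)
    for img, (u, v) in zip(sub_aperture_images, uv_coords):
        # shift each view proportionally to its aperture offset and alpha
        acc += nd_shift(img.astype(float), (alpha * v, alpha * u),
                        order=1, mode="nearest")
    return acc / len(sub_aperture_images)
```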

  10. Highly localized positive contrast of small paramagnetic objects using 3D center-out radial sampling with off-resonance reception.

    PubMed

    Seevinck, Peter R; de Leeuw, Hendrik; Bos, Clemens; Bakker, Chris J G

    2011-01-01

    In this article, we present a 3D imaging technique, applying center-out RAdial Sampling with Off-Resonance reception, to accurately depict and localize small paramagnetic objects with high positive contrast while suppressing long T(2) (*) components. The center-out RAdial Sampling with Off-Resonance reception imaging technique is a fully frequency-encoded 3D ultrashort echo time acquisition method, which uses a large excitation bandwidth and off-resonance reception. By manually introducing an offset, Δf(0), to the central reception frequency (f(0)), the typical radial signal pileup observed in 3D center-out sampling caused by a dipolar magnetic field disturbance can be shifted toward the source of the field disturbance, resulting in a hyperintense signal at the magnetic center of the small paramagnetic object. This was demonstrated both theoretically and using 1D time domain simulations. Experimental verification was done in a gel phantom and in inhomogeneous porcine tissue containing various objects with very different geometry and susceptibility, namely, subvoxel stainless steel spheres, a puncture needle, and paramagnetic brachytherapy seeds. In all cases, center-out RAdial Sampling with Off-Resonance reception was shown to generate high positive contrast exactly at the location of the paramagnetic object, as was confirmed by X-ray computed tomography. © 2010 Wiley-Liss, Inc.

  11. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an issue of education of primary importance. 25 years professional experience in France, the United States and Germany, Odile Meulien set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to the creation of the 90ies, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What means 3D? Is it communication? Is it perception? How the seeing and none seeing is interferes? What else has to be taken in consideration to communicate in 3D? How to handle the non visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction this has with our everyday life? Then come more practical questions: How to learn creating 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which matter? for whom?

  12. Wave propagation and phase retrieval in Fresnel diffraction by a distorted-object approach

    SciTech Connect

    Xiao Xianghui; Shen Qun

    2005-07-15

    An extension of the far-field x-ray diffraction theory is presented by the introduction of a distorted object for calculation of coherent diffraction patterns in the near-field Fresnel regime. It embeds a Fresnel-zone construction on an original object to form a phase-chirped distorted object, which is then Fourier transformed to form a diffraction image. This approach extends the applicability of Fourier-based iterative phasing algorithms into the near-field holographic regime where phase retrieval had been difficult. Simulated numerical examples of this near-field phase retrieval approach indicate its potential applications in high-resolution structural investigations of noncrystalline materials.
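
    A minimal numerical sketch of the construction is given below: the object is multiplied by a Fresnel-zone chirp (the "distorted object") and a single Fourier transform then yields the near-field diffraction intensity, up to scale factors and an overall quadratic phase that drops out of the intensity. Sampling parameters and function names are illustrative assumptions.

```python
import numpy as np

def fresnel_pattern_via_distorted_object(obj, dx, wavelength, z):
    """Fresnel-regime intensity via a phase-chirped (distorted) object.

    obj        : complex 2-D object transmission function (square array)
    dx         : sample spacing in the object plane
    wavelength : illumination wavelength
    z          : propagation distance
    """
    n = obj.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))
    distorted = obj * chirp                      # phase-chirped "distorted object"
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(distorted)))
    return np.abs(field) ** 2                    # recorded diffraction intensity
```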

  13. Acquisition and Neural Network Prediction of 3D Deformable Object Shape Using a Kinect and a Force-Torque Sensor †

    PubMed Central

    Tawbe, Bilal; Cretu, Ana-Maria

    2017-01-01

    The realistic representation of deformations is still an active area of research, especially for deformable objects whose behavior cannot be simply described in terms of elasticity parameters. This paper proposes a data-driven neural-network-based approach for capturing implicitly and predicting the deformations of an object subject to external forces. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, is collected over an object while forces are exerted by means of the probing tip of a force-torque sensor. A novel approach based on neural gas fitting is proposed to describe the particularities of a deformation over the selectively simplified 3D surface of the object, without requiring knowledge of the object material. An alignment procedure, a distance-based clustering, and inspiration from stratified sampling support this process. The resulting representation is denser in the region of the deformation (an average of 96.6% perceptual similarity with the collected data in the deformed area), while still preserving the object’s overall shape (86% similarity over the entire surface) and only using on average of 40% of the number of vertices in the mesh. A series of feedforward neural networks is then trained to predict the mapping between the force parameters characterizing the interaction with the object and the change in the object shape, as captured by the fitted neural gas nodes. This series of networks allows for the prediction of the deformation of an object when subject to unknown interactions. PMID:28492473
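
    The final mapping step can be sketched with a single feedforward regressor, as below; the data are random placeholders standing in for the measured force parameters and the fitted neural-gas node displacements, and one network replaces the series of networks used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: each row pairs force parameters (probe location,
# magnitude, direction) with the displacements of a fixed set of surface nodes.
rng = np.random.default_rng(1)
forces = rng.uniform(-1, 1, size=(300, 5))        # 5 force parameters per interaction
node_disp = rng.normal(size=(300, 3 * 50))        # 50 nodes x (dx, dy, dz)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(forces, node_disp)

# Predict the node displacements for an unseen interaction
new_force = rng.uniform(-1, 1, size=(1, 5))
predicted_shape_change = net.predict(new_force).reshape(50, 3)
```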

  14. Optometric Measurements Predict Performance but not Comfort on a Virtual Object Placement Task with a Stereoscopic 3D Display

    DTIC Science & Technology

    2014-09-16

    Keywords: virtual environment, depth perception. … precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the Simulator Sickness

  15. Extreme ultraviolet tomography using a compact laser-plasma source for 3D reconstruction of low density objects.

    PubMed

    Wachulak, Przemyslaw W; Węgrzyński, Łukasz; Zápražný, Zdenko; Bartnik, Andrzej; Fok, Tomasz; Jarocki, Roman; Kostecki, Jerzy; Szczurek, Miroslaw; Korytár, Dusan; Fiedorowicz, Henryk

    2014-02-01

    A tomographic method for three-dimensional reconstruction of low density objects is presented and discussed. The experiment was performed in the extreme ultraviolet (EUV) spectral region using a desktop system for enhanced optical contrast and employing a compact laser-plasma EUV source, based on a double stream gas puff target. The system allows for volume reconstruction of transient gaseous objects, in this case gas jets, providing additional information for further characterization and optimization. Experimental details and reconstruction results are shown.

  16. Objective 3D surface evaluation of intracranial electrophysiologic correlates of cerebral glucose metabolic abnormalities in children with focal epilepsy.

    PubMed

    Jeong, Jeong-Won; Asano, Eishi; Kumar Pilli, Vinod; Nakai, Yasuo; Chugani, Harry T; Juhász, Csaba

    2017-03-21

    To determine the spatial relationship between 2-deoxy-2[(18) F]fluoro-D-glucose (FDG) metabolic and intracranial electrophysiological abnormalities in children undergoing two-stage epilepsy surgery, statistical parametric mapping (SPM) was used to correlate hypo- and hypermetabolic cortical regions with ictal and interictal electrocorticography (ECoG) changes mapped onto the brain surface. Preoperative FDG-PET scans of 37 children with intractable epilepsy (31 with non-localizing MRI) were compared with age-matched pseudo-normal pediatric control PET data. Hypo-/hypermetabolic maps were transformed to 3D-MRI brain surface to compare the locations of metabolic changes with electrode coordinates of the ECoG-defined seizure onset zone (SOZ) and interictal spiking. While hypometabolic clusters showed a good agreement with the SOZ on the lobar level (sensitivity/specificity = 0.74/0.64), detailed surface-distance analysis demonstrated that large portions of ECoG-defined SOZ and interictal spiking area were located at least 3 cm beyond hypometabolic regions with the same statistical threshold (sensitivity/specificity = 0.18-0.25/0.94-0.90 for overlap 3-cm distance); for a lower threshold, sensitivity for SOZ at 3 cm increased to 0.39 with a modest compromise of specificity. Performance of FDG-PET SPM was slightly better in children with smaller as compared with widespread SOZ. The results demonstrate that SPM utilizing age-matched pseudocontrols can reliably detect the lobe of seizure onset. However, the spatial mismatch between metabolic and EEG epileptiform abnormalities indicates that a more complete SOZ detection could be achieved by extending intracranial electrode coverage at least 3 cm beyond the metabolic abnormality. Considering that the extent of feasible electrode coverage is limited, localization information from other modalities is particularly important to optimize grid coverage in cases of large hypometabolic cortex. Hum Brain Mapp, 2017. © 2017

  17. Perirhinal Cortex Is Necessary for Acquiring, but Not for Retrieving Object-Place Paired Association

    ERIC Educational Resources Information Center

    Jo, Yong Sang; Lee, Inah

    2010-01-01

    Remembering events frequently involves associating objects and their associated locations in space, and it has been implicated that the areas associated with the hippocampus are important in this function. The current study examined the role of the perirhinal cortex in retrieving familiar object-place paired associates, as well as in acquiring…

  19. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. © 2012 American Association of Anatomists.

  20. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  2. 3D-Modeling of deformed halite hopper crystals: Object based image analysis and support vector machine, a first evaluation

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-05-01

    Halite hopper crystals are thought to develop by displacive growth in unconsolidated mud (Gornitz & Schreiber, 1984). The Alpine Haselgebirge, but also e.g. the salt deposits of the Rhine graben (mined at the beginning of the 20th century), comprise hopper crystals with shapes of cuboids, parallelepipeds and rhombohedrons (Görgey, 1912). They evidently deformed under oriented stress, an orientation that earlier work attempted to reconstruct with respect to the sedimentary layering (Leitner et al., 2013). In the present work, deformed halite hopper crystals embedded in mudrock were reconstructed automatically. Object based image analysis (OBIA) has previously been used successfully in remote sensing for 2D images. The present study represents the first time that the method was used for the reconstruction of three-dimensional geological objects. First, a manual reference (gold standard) was created by redrawing the contours of the halite crystals on each HRXCT scanning slice. Then, for OBIA, the computer program eCognition was used and a rule set was developed for the automated reconstruction. The strength of OBIA was its ability to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. In a second step, all objects unsuitable for a structural deformation analysis (clusters, polyhalite-coated crystals and spherical halites) were dismissed using a support vector machine (SVM). The SVM simultaneously reduced the number of halites drastically: of the 184 OBIA objects, 67 well-shaped crystals remained, which comes close to the 52 manually pre-selected objects. To assess the accuracy of the automated reconstruction, the results before and after the SVM were compared to the reference, i.e. the gold standard. State-of-the-art per-scene statistics were extended to per-object statistics. Görgey R (1912) Zur Kenntnis der Kalisalzlager von Wittelsheim im Ober-Elsaß. Tschermaks Mineral Petrogr Mitt 31:339-468. Gornitz VM, Schreiber BC (1981) Displacive halite hoppers from the Dead Sea
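
    The second, SVM-based screening step can be sketched as follows, with purely hypothetical shape features and labels standing in for the real per-object measurements from the OBIA stage.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-object shape features (e.g. volume, elongation, sphericity,
# mean intensity); labels mark objects suitable for the deformation analysis (1)
# or to be dismissed (0), e.g. clusters or coated crystals.
rng = np.random.default_rng(2)
X = rng.normal(size=(184, 4))                  # one row per OBIA object
y = (X[:, 1] > 0.2).astype(int)                # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
keep = clf.predict(X).astype(bool)             # objects retained for the analysis
print(f"{keep.sum()} of {len(X)} objects kept")
```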

  3. 3D Multi-Object Segmentation of Cardiac MSCT Imaging by using a Multi-Agent Approach

    PubMed Central

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernandez, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed. PMID:18003382

  4. Computer-aided laser-optoelectronic OPTEL 3D measurement systems of complex-shaped object geometry

    NASA Astrophysics Data System (ADS)

    Galiulin, Ravil M.; Galiulin, Rishat M.; Bakirov, J. M.; Bogdanov, D. R.; Shulupin, C. O.; Khamitov, D. H.; Khabibullin, M. G.; Pavlov, A. F.; Ryabov, M. S.; Yamaliev, K. N.

    1996-03-01

    Technical characteristics, advantages and applications of automated optoelectronic measuring systems designed at the Regional Interuniversity Optoelectronic Systems Laboratory ('OPTEL') of Ufa State Aviation Technical University are given. The suggested range of systems is the result of long-term scientific research, design work and industrial introduction. The systems can be applied in industrial development and research for the high-precision measurement of geometrical parameters in aerospace, robotics and related fields, where fast, non-contact measurements of complex-shaped objects made of various materials, including brittle and plastic articles, are required.

  5. 3D multi-object segmentation of cardiac MSCT imaging by using a multi-agent approach.

    PubMed

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernández, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed.

  6. Multi-class maximum likelihood symmetry determination and motif reconstruction of 3-D helical objects from projection images for electron microscopy

    PubMed Central

    Lee, Seunghee; Johnson, John E.

    2011-01-01

    Many micro- to nano-scale 3-D biological objects have a helical symmetry. Cryo electron microscopy provides 2-D projection images where, however, the images have low SNR and unknown projection directions. The object is described as a helical array of identical motifs, where both the parameters of the helical symmetry and the motif are unknown. Using a detailed image formation model, a maximum likelihood estimator for the parameters of the symmetry and the 3-D motif based on images of many objects and algorithms for computing the estimate are described. The possibility that the objects are not identical but rather come from a small set of homogeneous classes is included. The first example is based on 316 128×100 pixel experimental images of Tobacco Mosaic Virus, has one class, and achieves 12.40Å spatial resolution in the reconstruction. The second example is based on 400 128 × 128 pixel synthetic images of helical objects constructed from NaK ion channel pore macromolecular complexes, has two classes differing in helical symmetry, and achieves 7.84Å and 7.90Å spatial resolution in the reconstructions for the two classes. PMID:21335314

  7. The Time Course of Name Retrieval during Multiple-Object Naming: Evidence from Extrafoveal-on-Foveal Effects

    ERIC Educational Resources Information Center

    Malpass, Debra; Meyer, Antje S.

    2010-01-01

    The goal of the study was to examine whether speakers naming pairs of objects would retrieve the names of the objects in parallel or in sequence. To this end, we recorded the speakers' eye movements and determined whether the difficulty of retrieving the name of the 2nd object affected the duration of the gazes to the 1st object. Two experiments,…

  8. Identification of physical properties for the retrieval data quality objective process

    SciTech Connect

    Gates, C.M.; Beckette, M.R.

    1995-06-01

    This activity supports the retrieval data quality objective (DQO) process by identifying the material properties that are important to the design, development, and operation of retrieval equipment; the activity also provides justification for characterizing those properties. These properties, which control tank waste behavior during retrieval operations, are also critical to the development of valid physical simulants for designing retrieval equipment. The waste is to be retrieved in a series of four steps. First, a selected retrieval technology breaks up or dislodges the waste into subsequently smaller pieces. Then, the dislodged waste is conveyed out of the tank through the conveyance line. Next, the waste flows into a separator unit that separates the gaseous phase from the liquid and solid phases. Finally, a unit may be present to condition the slurried waste before transporting it to the treatment facility. This document describes the characterization needs for the proposed processes to accomplish waste retrieval. Baseline mobilization technologies include mixer pump technology, sluicing, and high-pressure water-jet cutting. Other processes that are discussed in this document include slurry formation, pneumatic conveyance, and slurry transport. Section 2.0 gives a background of the DQO process and the different retrieval technologies. Section 3.0 provides the mechanistic descriptions and material properties critical to the different technologies and processes. Supplemental information on specific technologies and processes is provided in the appendices. Appendix A contains a preliminary sluicing model, and Appendices B and C cover pneumatic transport and slurry transport, respectively, as prepared for this document. Appendix D contains sample calculations for various equations.

  9. Source retrieval is not properly differentiated from object retrieval in early schizophrenia: an fMRI study using virtual reality.

    PubMed

    Hawco, Colin; Buchy, Lisa; Bodnar, Michael; Izadi, Sarah; Dell'Elce, Jennifer; Messina, Katrina; Joober, Ridha; Malla, Ashok; Lepage, Martin

    2015-01-01

    Source memory, the ability to identify the context in which a memory occurred, is impaired in schizophrenia and has been related to clinical symptoms such as hallucinations. The neurobiological underpinnings of this deficit are not well understood. Twenty-five patients with recent onset schizophrenia (within the first 4.5 years of treatment) and twenty-four healthy controls completed a source memory task. Participants navigated through a 3D virtual city, and had 20 encounters of an object with a person at a place. Functional magnetic resonance imaging was performed during a subsequent forced-choice recognition test. Two objects were presented and participants were asked to either identify which object was seen (new vs. old object recognition), or identify which of the two old objects was associated with either the person or the place being presented (source memory recognition). Source memory was examined by contrasting person or place with object. Both patients and controls demonstrated significant neural activity to source memory relative to object memory, though activity in controls was much more widespread. Group differences were observed in several regions, including the medial parietal and cingulate cortex, lateral frontal lobes and right superior temporal gyrus. Patients with schizophrenia did not differentiate between source and object memory in these regions. Positive correlations with hallucination proneness were observed in the left frontal and right middle temporal cortices and cerebellum. Patients with schizophrenia have a deficit in the neural circuits which facilitate source memory, which may underlie both the deficits in this domain and be related to auditory hallucinations.

  10. Source retrieval is not properly differentiated from object retrieval in early schizophrenia: An fMRI study using virtual reality

    PubMed Central

    Hawco, Colin; Buchy, Lisa; Bodnar, Michael; Izadi, Sarah; Dell'Elce, Jennifer; Messina, Katrina; Joober, Ridha; Malla, Ashok; Lepage, Martin

    2014-01-01

    Source memory, the ability to identify the context in which a memory occurred, is impaired in schizophrenia and has been related to clinical symptoms such as hallucinations. The neurobiological underpinnings of this deficit are not well understood. Twenty-five patients with recent onset schizophrenia (within the first 4.5 years of treatment) and twenty-four healthy controls completed a source memory task. Participants navigated through a 3D virtual city, and had 20 encounters of an object with a person at a place. Functional magnetic resonance imaging was performed during a subsequent forced-choice recognition test. Two objects were presented and participants were asked to either identify which object was seen (new vs. old object recognition), or identify which of the two old objects was associated with either the person or the place being presented (source memory recognition). Source memory was examined by contrasting person or place with object. Both patients and controls demonstrated significant neural activity to source memory relative to object memory, though activity in controls was much more widespread. Group differences were observed in several regions, including the medial parietal and cingulate cortex, lateral frontal lobes and right superior temporal gyrus. Patients with schizophrenia did not differentiate between source and object memory in these regions. Positive correlations with hallucination proneness were observed in the left frontal and right middle temporal cortices and cerebellum. Patients with schizophrenia have a deficit in the neural circuits which facilitate source memory, which may underlie both the deficits in this domain and be related to auditory hallucinations. PMID:25610794

  11. Object-based 3D geomodel with multiple constraints for early Pliocene fan delta in the south of Lake Albert Basin, Uganda

    NASA Astrophysics Data System (ADS)

    Wei, Xu; Lei, Fang; Xinye, Zhang; Pengfei, Wang; Xiaoli, Yang; Xipu, Yang; Jun, Liu

    2017-01-01

The early Pliocene fan delta complex developed in the south of the Lake Albert Basin, which is located at the northern end of the western branch of the East African Rift System. The stratigraphy of this succession is composed of distributary channels, overbank deposits, mouthbars and lacustrine shales. Limited by the poor seismic quality and the small number of wells, delineating the distribution area and patterns of the reservoir sands is challenging. Sedimentary forward simulation and basin analogues were applied to analyze the spatial distribution of the facies configuration, and a conceptual sedimentary model was then constructed by combining core, heavy mineral and palynology evidence. A 3D geological model of a 120 m thick stratigraphic succession was built using well logs and seismic surfaces based on the established sedimentary model. The facies modeling followed a hierarchical object-based approach conditioned to multiple trend constraints such as channel intensity, channel azimuth and channel width. Lacustrine shales were modeled as the background facies and then eroded in turn by distributary channels, overbank deposits and mouthbars, respectively. At the same time, a body facies parameter was created to indicate the connectivity of the reservoir sands. The resulting 3D facies distributions showed that the distributary channels flowed from the eastern bounding fault to the western flank, that overbank deposits adhered to the fringes of the channels, and that mouthbars were located at the ends of the channels. Furthermore, porosity and permeability were modeled using sequential Gaussian simulation (SGS) honoring core observations and petrophysical interpretation results. Although the poor seismic quality does not provide enough information on the fan delta sand distribution, a truly representative 3D geomodel can still be achieved. This paper highlights the integration of various data and the comprehensive steps of building a consistent, representative 3D geocellular fan delta model used for numerical simulation studies and field

  12. Retrieval of Similar Objects in Simulation Data Using Machine Learning Techniques

    SciTech Connect

    Cantu-Paz, E; Cheung, S-C; Kamath, C

    2003-06-19

    Comparing the output of a physics simulation with an experiment is often done by visually comparing the two outputs. In order to determine which simulation is a closer match to the experiment, more quantitative measures are needed. This paper describes our early experiences with this problem by considering the slightly simpler problem of finding objects in an image that are similar to a given query object. Focusing on a dataset from a fluid mixing problem, we report on our experiments using classification techniques from machine learning to retrieve the objects of interest in the simulation data. The early results reported in this paper suggest that machine learning techniques can retrieve more objects that are similar to the query than distance-based similarity methods.
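
    The contrast drawn here, distance-based similarity versus learned classifiers, can be illustrated with a minimal sketch on generic feature vectors; the features, labels and the use of scikit-learn's LinearSVC below are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: distance-based vs. classifier-based retrieval of similar objects,
# assuming each object is already described by a fixed-length feature vector.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))        # hypothetical object features
labels = rng.integers(0, 2, size=500)        # 1 = user marked "similar to query"

query = features[labels == 1].mean(axis=0)   # a simple query prototype

# Distance-based retrieval: rank all objects by Euclidean distance to the query.
dist_rank = np.argsort(np.linalg.norm(features - query, axis=1))

# Classifier-based retrieval: train on user-labelled examples, rank by decision score.
clf = LinearSVC(C=1.0).fit(features, labels)
score_rank = np.argsort(-clf.decision_function(features))

print("Top-10 by distance:  ", dist_rank[:10])
print("Top-10 by classifier:", score_rank[:10])
```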

  13. Impact of assimilation of INSAT-3D retrieved atmospheric motion vectors on short-range forecast of summer monsoon 2014 over the South Asian region

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Deb, Sanjib K.; Kishtawal, C. M.; Pal, P. K.

    2017-05-01

    The Weather Research and Forecasting (WRF) model and its three-dimensional variational data assimilation system are used in this study to assimilate atmospheric motion vectors (AMVs) derived from INSAT-3D, a recently launched Indian geostationary meteorological satellite, over the South Asian region during the peak Indian summer monsoon month (i.e., July 2014). A total of four experiments were performed daily, with and without assimilation of the INSAT-3D-derived AMVs and of the other AMVs available through the Global Telecommunication System (GTS), for the entire month of July 2014. Before assimilating these newly derived INSAT-3D AMVs in the numerical model, a preliminary evaluation of these AMVs is performed against National Centers for Environmental Prediction (NCEP) final model analyses. The preliminary validation results show that the root-mean-square vector difference (RMSVD) for INSAT-3D AMVs is ˜3.95, 6.66, and 5.65 ms-1 at low, mid, and high levels, respectively, and slightly larger RMSVDs are noticed in GTS AMVs (˜4.0, 8.01, and 6.43 ms-1 at low, mid, and high levels, respectively). The assimilation of AMVs improved the WRF model-produced wind speed, temperature, and moisture analyses as well as the subsequent model forecasts over the Indian Ocean, Arabian Sea, Australia, and South Africa. Slightly larger improvements are noticed in the experiment where only the INSAT-3D AMVs are assimilated compared to the experiment where only GTS AMVs are assimilated. The results also show improvement in rainfall predictions over the Indian region after AMV assimilation. Overall, the assimilation of INSAT-3D AMVs improved the WRF model short-range predictions over the South Asian region as compared to the control experiments.
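
    The root-mean-square vector difference (RMSVD) used for the validation above is simply the RMS of the vector wind differences; a minimal sketch (with made-up wind components) is:

```python
import numpy as np

def rmsvd(u_obs, v_obs, u_ref, v_ref):
    """Root-mean-square vector difference between two wind fields (m/s)."""
    du = np.asarray(u_obs) - np.asarray(u_ref)
    dv = np.asarray(v_obs) - np.asarray(v_ref)
    return np.sqrt(np.mean(du**2 + dv**2))

# Hypothetical AMV u/v components vs. collocated reference analysis winds
u_amv, v_amv = np.array([12.1, 8.4, -3.2]), np.array([1.5, -0.7, 4.9])
u_ref, v_ref = np.array([11.0, 9.1, -2.5]), np.array([2.2, -1.1, 4.0])
print(f"RMSVD = {rmsvd(u_amv, v_amv, u_ref, v_ref):.2f} m/s")
```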

  14. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools is incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can therefore be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  15. Distinct neuronal interactions in anterior inferotemporal areas of macaque monkeys during retrieval of object association memory.

    PubMed

    Hirabayashi, Toshiyuki; Tamura, Keita; Takeuchi, Daigo; Takeda, Masaki; Koyano, Kenji W; Miyashita, Yasushi

    2014-07-09

    In macaque monkeys, the anterior inferotemporal cortex, a region crucial for object memory processing, is composed of two adjacent, hierarchically distinct areas, TE and 36, for which different functional roles and neuronal responses in object memory tasks have been characterized. However, it remains unknown how the neuronal interactions differ between these areas during memory retrieval. Here, we conducted simultaneous recordings from multiple single-units in each of these areas while monkeys performed an object association memory task and examined the inter-area differences in neuronal interactions during the delay period. Although memory neurons showing sustained activity for the presented cue stimulus, cue-holding (CH) neurons, interacted with each other in both areas, only those neurons in area 36 interacted with another type of memory neurons coding for the to-be-recalled paired associate (pair-recall neurons) during memory retrieval. Furthermore, pairs of CH neurons in area TE showed functional coupling in response to each individual object during memory retention, whereas the same class of neuron pairs in area 36 exhibited a comparable strength of coupling in response to both associated objects. These results suggest predominant neuronal interactions in area 36 during the mnemonic processing, which may underlie the pivotal role of this brain area in both storage and retrieval of object association memory. Copyright © 2014 the authors 0270-6474/14/349377-12$15.00/0.

  16. Classification and segmentation of orbital space based objects against terrestrial distractors for the purpose of finding holes in shape from motion 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Flores, Arturo; Hoffman, Heiko

    2013-12-01

    3D reconstruction of objects via Shape from Motion (SFM) has made great strides recently. Utilizing images from a variety of poses, objects can be reconstructed in 3D without knowing the camera pose a priori. These feature points can then be bundled together to create large-scale scene reconstructions automatically. A shortcoming of current SFM reconstruction methods is in dealing with specular or flat, low-feature surfaces. The inability of SFM to handle these surfaces creates holes in a 3D reconstruction, which can cause problems when the reconstruction is used for proximity detection and collision avoidance by a space vehicle working around another space vehicle. As such, we would like the ability to automatically recognize when a hole in a 3D reconstruction is in fact not a hole, but a place where reconstruction has failed. Once we know about such a location, methods can be used either to fill in that region more vigorously or to instruct a space vehicle to proceed with more caution around that area. Detecting such areas in Earth-orbiting objects is non-trivial, since we need to separate complex vehicle features from complex Earth features, particularly when the observing vehicle is directly above the target vehicle. To do this, we have created a Space Object Classifier and Segmenter (SOCS) hole finder. The general principle is to classify image features into three categories (earth, man-made, space). Classified regions are then clustered into probabilistic regions which can then be segmented out. Our categorization method augments a state-of-the-art bag-of-visual-words method for object categorization. This method works by first extracting PHOW (dense SIFT-like) features, which are computed over an image and then quantized via a KD-tree. The quantization results are then binned into histograms and classified by the PEGASOS support vector machine solver. This gives a probability that a patch in the image corresponds to one of three
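
    As a rough sketch of the bag-of-visual-words pipeline described above (local descriptors, quantization into a visual vocabulary, histogram binning, linear SVM), the following uses random placeholder descriptors and scikit-learn's KMeans and LinearSVC as stand-ins for the KD-tree quantizer and PEGASOS solver:

```python
# Sketch of a bag-of-visual-words classifier (earth / man-made / space),
# assuming dense local descriptors have already been extracted per image.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_images, descs_per_image, dim, n_words = 60, 200, 128, 32
descriptors = [rng.normal(size=(descs_per_image, dim)) for _ in range(n_images)]
labels = rng.integers(0, 3, size=n_images)          # 0=earth, 1=man-made, 2=space

# 1. Build the visual vocabulary by clustering all local descriptors.
vocab = KMeans(n_clusters=n_words, n_init=4, random_state=0)
vocab.fit(np.vstack(descriptors))

# 2. Represent each image as a normalized histogram of visual-word counts.
def bow_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in descriptors])

# 3. Train a linear SVM on the histograms and classify new patches/images.
clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:5]))
```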

  17. Age-related changes in feature-based object memory retrieval as measured by event-related potentials

    PubMed Central

    Chiang, Hsueh-Sheng; Mudar, Raksha A.; Spence, Jeffrey S.; Pudhiyidath, Athula; Eroh, Justin; DeLaRosa, Bambi; Kraut, Michael A.; Hart, John

    2014-01-01

    To investigate neural mechanisms that support semantic functions in aging, we recorded scalp EEG during an object retrieval task in 22 younger and 22 older adults. The task required determining if a particular object could be retrieved when two visual words representing object features were presented. Both age groups had comparable accuracy although response times were longer in older adults. In both groups a left fronto-temporal negative potential occurred at around 750 msec during object retrieval, consistent with previous findings (Brier et al., 2008). Only in older adults was a later positive frontal potential found, peaking between 800 and 1000 msec, during no retrieval. These findings suggest younger and older adults employ comparable neural mechanisms when features clearly facilitate retrieval of an object memory, but when features yield no retrieval, older adults use additional neural resources to engage in a more effortful and exhaustive search prior to making a decision. PMID:24911552

  18. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR - combining techniques towards an objective multiparametric interpretation

    NASA Astrophysics Data System (ADS)

    Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K.

    2015-09-01

    Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and to visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was first compared to the results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Disadvantages of passive data collection (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified, and it compares well with the conventional trench log. Accordingly, adjacent stratigraphic units could be distinguished by their particular multispectral composition signatures. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored allowing unbiased input for future (re-)investigations.
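
    The iso cluster analysis referred to above is an unsupervised classification of multispectral pixel values; a minimal stand-in using k-means on stacked visible and near-infrared bands might look like the following (band data and the number of units are hypothetical):

```python
# Sketch: unsupervised clustering of a multispectral trench-wall image
# (visible RGB plus a near-infrared backscatter band) into stratigraphic units.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
h, w = 120, 200
rgb = rng.random((h, w, 3))          # photomosaic bands (placeholder data)
nir = rng.random((h, w, 1))          # t-LiDAR near-infrared backscatter (placeholder)

pixels = np.concatenate([rgb, nir], axis=2).reshape(-1, 4)
n_units = 5                          # assumed number of distinguishable layers
unit_map = KMeans(n_clusters=n_units, n_init=4, random_state=0) \
               .fit_predict(pixels).reshape(h, w)
print(unit_map.shape, np.unique(unit_map))
```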

  19. A comparison of dimensionality reduction methods for retrieval of similar objects in simulation data

    SciTech Connect

    Cantu-Paz, E; Cheung, S S; Kamath, C

    2003-09-23

    High-resolution computer simulations produce large volumes of data. As a first step in the analysis of these data, supervised machine learning techniques can be used to retrieve objects similar to a query that the user finds interesting. These objects may be characterized by a large number of features, some of which may be redundant or irrelevant to the similarity retrieval problem. This paper presents a comparison of six dimensionality reduction algorithms on data from a fluid mixing simulation. The objective is to identify methods that efficiently find feature subsets that result in high accuracy rates. Our experimental results with single- and multi-resolution data suggest that standard forward feature selection produces the smallest feature subsets in the shortest time.
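
    The forward feature selection highlighted in the conclusion greedily adds, at each step, the feature that most improves cross-validated accuracy; a small sketch with made-up data and an arbitrary k-NN classifier is:

```python
# Sketch: greedy forward feature selection driven by cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))                 # hypothetical object features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # labels depend on features 0 and 3

def forward_select(X, y, max_features=5):
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(KNeighborsClassifier(),
                                     X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:       # stop when no feature helps
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best_score

print(forward_select(X, y))
```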

  20. An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.

    PubMed

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-02-01

    In the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice, and physicians can directly order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc., and read the results of these examinations, except medical signal waves, schemata and images, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS utilizing an object-oriented database, as a first step toward handling digitized signal, schema and image data and showing waves, graphics and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODBMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface and the results of its implementation in the HIS.

  1. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    PubMed

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitative and assessment process of the gait abilities in children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D-Gait Analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training by using the Gross Motor Function Measures (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6 Minutes Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics on the sagittal plane revealed statistically significant increase in the range of motion during the hip flex-extension. Our data suggest that the combined program RAGT+PT induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D-GA ensures a fully objective rehabilitative program.

  2. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    PubMed

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program had been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in object-oriented programming language C++ using cross-platform software Qt (TrollTech, Oslo, Norway) for graphical user interface (GUI) and is transportable to any platform.

  3. The Time Course of Activation of Object Shape and Shape+Colour Representations during Memory Retrieval

    PubMed Central

    Lloyd-Jones, Toby J.; Roberts, Mark V.; Leek, E. Charles; Fouquet, Nathalie C.; Truchanowicz, Ewa G.

    2012-01-01

    Little is known about the timing of activating memory for objects and their associated perceptual properties, such as colour, and yet this is important for theories of human cognition. We investigated the time course associated with early cognitive processes related to the activation of object shape and object shape+colour representations respectively, during memory retrieval as assessed by repetition priming in an event-related potential (ERP) study. The main findings were as follows: (1) we identified a unique early modulation of mean ERP amplitude during the N1 that was associated with the activation of object shape independently of colour; (2) we also found a subsequent early P2 modulation of mean amplitude over the same electrode clusters associated with the activation of object shape+colour representations; (3) these findings were apparent across both familiar (i.e., correctly coloured – yellow banana) and novel (i.e., incorrectly coloured - blue strawberry) objects; and (4) neither of the modulations of mean ERP amplitude were evident during the P3. Together the findings delineate the timing of object shape and colour memory systems and support the notion that perceptual representations of object shape mediate the retrieval of temporary shape+colour representations for familiar and novel objects. PMID:23155393

  4. Acceleration of the calculation speed of computer-generated holograms using the sparsity of the holographic fringe pattern for a 3D object.

    PubMed

    Kim, Hak Gu; Jeong, Hyunwook; Man Ro, Yong

    2016-10-31

    In computer-generated hologram (CGH) calculations, a diffraction pattern needs to be calculated from all points of a 3-D object, which incurs a heavy computational cost. In this paper, we propose a novel fast computer-generated hologram calculation method using a sparse fast Fourier transform. The proposed method consists of two steps. First, the sparse dominant signals of the CGH are measured by calculating a wavefront on a virtual plane between the object and the CGH plane. Second, the wavefront on the CGH plane is calculated from the measured sparsity using sparse Fresnel diffraction. Experimental results show that the proposed method is much faster than existing methods while preserving visual quality.
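
    The heavy cost mentioned above comes from summing a spherical-wave contribution from every object point at every hologram pixel; a brute-force sketch of that baseline (with made-up geometry and wavelength) makes the points-times-pixels scaling explicit. This is the computation that sparse-FFT approaches aim to accelerate.

```python
# Sketch: brute-force point-source CGH, the O(N_points * N_pixels) baseline.
# Geometry, wavelength and object points are illustrative only.
import numpy as np

wavelength = 532e-9                  # metres
k = 2 * np.pi / wavelength
pitch = 8e-6                         # hologram pixel pitch
nx = ny = 256

xs = (np.arange(nx) - nx / 2) * pitch
ys = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(xs, ys)

# A few hypothetical 3-D object points (x, y, z, amplitude)
points = [(0.0, 0.0, 0.10, 1.0), (2e-4, -1e-4, 0.12, 0.8)]

field = np.zeros((ny, nx), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp / r * np.exp(1j * k * r)      # spherical wave from each point

hologram = np.angle(field)                      # e.g. a phase-only CGH
print(hologram.shape)
```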

  5. 3D data merging using Holoimage

    NASA Astrophysics Data System (ADS)

    Zhang, Song; Yau, Shing-Tung

    2007-09-01

    Three-dimensional data merging is critical for full-field 3-D shape measurement. 3-D range data patches, acquired either from different sensors or from the same sensor at different viewing angles, have to be merged into a single piece to facilitate further data analysis. In this research, we propose a novel method for 3-D data merging using Holoimage. Similar to a 3-D shape measurement system using a phase-shifting method, a Holoimage is a phase-shifting-based, computer-synthesized fringe image. A virtual projector projects the phase-shifted fringe pattern onto the object, the reflected fringe images are rendered on the screen, and the Holoimage is generated by recording the screen. The 3-D information is retrieved from the Holoimage using a phase-shifting method. If two patches of 3-D data with overlapping areas are rendered by OpenGL, the overlapping areas are resolved by the graphics pipeline, i.e., only the front geometry can be visualized. Therefore, the merging is done once the front geometry information can be obtained, and Holoimage obtains the front geometry by projecting the fringe patterns onto the rendered scene. Unlike in the real world, the virtual camera and projector can be used as orthogonal projective devices, and the setup of the system can be controlled accurately and easily. Both simulation and experiments demonstrated the success of the proposed method.
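
    For a three-step phase-shifting configuration with shifts of 2π/3, the phase retrieval mentioned above reduces to a closed-form arctangent; a minimal sketch on synthetic fringe images is given below (the three-step assumption is ours, not necessarily the authors' exact configuration).

```python
# Sketch: three-step phase-shifting retrieval (shifts of -2*pi/3, 0, +2*pi/3),
# as commonly used in fringe-projection / Holoimage-style systems.
import numpy as np

x = np.linspace(0, 4 * np.pi, 512)
phi_true = 0.8 * np.sin(x)                       # synthetic phase to recover
bias, mod = 0.5, 0.4
I1 = bias + mod * np.cos(phi_true - 2 * np.pi / 3)
I2 = bias + mod * np.cos(phi_true)
I3 = bias + mod * np.cos(phi_true + 2 * np.pi / 3)

phi = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)   # wrapped phase
print(np.allclose(np.unwrap(phi), phi_true, atol=1e-6))
```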

  6. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    NASA Astrophysics Data System (ADS)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a hand held device developed to assist people to locate and retrieve information about objects in a home. The system developed is a standalone device to assist persons with memory impairments such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

  7. Percutaneous Retrieval of Misplaced Intravascular Foreign Objects with the Dormia Basket: An Effective Solution

    SciTech Connect

Sheth, Rahul; Someshwar, Vimal; Warawdekar, Gireesh

    2007-02-15

    Purpose. We report our experience of the retrieval of intravascular foreign body objects by the percutaneous use of the Gemini Dormia basket. Methods. Over a period of 2 years we attempted the percutaneous removal of intravascular foreign bodies in 26 patients. Twenty-six foreign bodies were removed: 8 intravascular stents, 4 embolization coils, 9 guidewires, 1 pacemaker lead, and 4 catheter fragments. The percutaneous retrieval was achieved with a combination of guide catheters and the Gemini Dormia basket. Results. Percutaneous retrieval was successful in 25 of 26 patients (96.2%). It was possible to remove all the intravascular foreign bodies with a combination of guide catheters and the Dormia basket. No complication occurred during the procedure, and no long-term complications were registered during the follow-up period, which ranged from 6 months to 32 months (mean 22.4 months overall). Conclusion. Percutaneous retrieval is an effective and safe technique that should be the first choice for removal of an intravascular foreign body.

  8. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of

  9. Impaired retrieval of object-colour knowledge with preserved colour naming.

    PubMed

    Luzzatti, C; Davidoff, J

    1994-08-01

    Two cases (G.G. and A.V.) are described of cognitive impairment resulting from herpes simplex infection. Both cases demonstrated anomic disorders and impairments in drawing, but only in G.G.'s drawings was there a reliable selective impairment for items from natural categories. Both cases, however, showed an impairment in the retrieval of knowledge concerning the colours of objects. This impairment has, in the past, been ascribed to interference from colour anomia; this was not so for the present cases. For G.G. and A.V., impairments in object-colour retrieval were related to errors in picture naming. More errors were associated with items that induced circumlocutions than with those that were correctly named. The impairment was also present for some items that were named correctly. The patients' impairments are discussed within a model in which object-colour knowledge is functionally situated between an object's shape description and its output phonology but on a separate route from other associated object knowledge.

  10. Comparison of single distance phase retrieval algorithms by considering different object composition and the effect of statistical and structural noise.

    PubMed

    Chen, R C; Rigon, L; Longo, R

    2013-03-25

    Phase retrieval is a technique for extracting quantitative phase information from X-ray propagation-based phase-contrast tomography (PPCT). In this paper, the performance of different single-distance phase retrieval algorithms is investigated. The algorithms are herein called the phase-attenuation duality Born Algorithm (PAD-BA), phase-attenuation duality Rytov Algorithm (PAD-RA), phase-attenuation duality Modified Bronnikov Algorithm (PAD-MBA), phase-attenuation duality Paganin algorithm (PAD-PA) and phase-attenuation duality Wu Algorithm (PAD-WA), respectively. They are all based on the phase-attenuation duality property and on weak absorption of the sample, and they employ only single-distance PPCT data. In this paper, they are investigated via simulated noise-free PPCT data, considering the fulfillment of the PAD property and the weakly absorbing condition, and with experimental PPCT data of a mixture sample containing absorbing and weakly absorbing materials, and of a polymer sample considering different degrees of statistical and structural noise. The simulations show that all algorithms can quantitatively reconstruct the 3D refractive index of a quasi-homogeneous weakly absorbing object from noise-free PPCT data. When the weakly absorbing condition is violated, PAD-RA and PAD-PA/WA obtain better results than PAD-BA and PAD-MBA, as shown in both the simulation and the mixture sample results. When considering statistical noise, the contrast-to-noise ratio values decrease as the photon number is reduced. The structural noise study shows that the result is progressively corrupted by ring-like artifacts as the structural noise (i.e. phantom thickness) increases. PAD-RA and PAD-PA/WA achieve better density resolution than PAD-BA and PAD-MBA in both the statistical and the structural noise studies.
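
    Of the algorithms compared, the Paganin-type single-distance retrieval has a particularly compact closed form under the phase-attenuation duality (homogeneous object) assumption; the sketch below uses made-up experimental parameters purely for illustration.

```python
# Sketch: Paganin-style single-distance phase retrieval (phase-attenuation duality),
# recovering projected thickness from one flat-corrected propagation-based image.
import numpy as np

def paganin_thickness(I, I0, pixel, dist, delta, beta, wavelength):
    """Projected thickness map from a single flat-corrected PPCT projection."""
    mu = 4 * np.pi * beta / wavelength                  # linear attenuation coeff.
    ny, nx = I.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
    KX, KY = np.meshgrid(kx, ky)
    filt = 1.0 + dist * delta / mu * (KX**2 + KY**2)
    smooth = np.real(np.fft.ifft2(np.fft.fft2(I / I0) / filt))
    return -np.log(np.clip(smooth, 1e-12, None)) / mu

# Hypothetical projection of a weakly absorbing square object
I = np.full((256, 256), 0.98); I[96:160, 96:160] = 0.9
T = paganin_thickness(I, I0=1.0, pixel=1e-6, dist=0.3,
                      delta=1e-7, beta=1e-10, wavelength=1e-10)
print(T.min(), T.max())
```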

  11. Medical 3-D Printing.

    PubMed

    Furlow, Bryant

    2017-05-01

    Three-dimensional printing is used in the manufacturing industry, medical and pharmaceutical research, drug production, clinical medicine, and dentistry, with implications for precision and personalized medicine. This technology is advancing the development of patient-specific prosthetics, stents, splints, and fixation devices and is changing medical education, treatment decision making, and surgical planning. Diagnostic imaging modalities play a fundamental role in the creation of 3-D printed models. Although most 3-D printed objects are rigid, flexible soft-tissue-like prosthetics also can be produced. ©2017 American Society of Radiologic Technologists.

  12. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three-dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage than two-dimensional (2D) spatial data, since they involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model adopted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose a data constellation technique based on space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, which try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling improves data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
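
    To illustrate how a space-filling curve clusters nearby 3D objects onto nearby 1D keys, the sketch below uses a 3D Morton (Z-order) code, a simpler relative of the 3D Hilbert curve employed in the paper; coordinates and bit depth are arbitrary.

```python
# Sketch: interleaving the bits of quantized (x, y, z) coordinates into a 3D
# Morton key, so that spatially close objects receive close 1-D index keys.
def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Hypothetical quantized centroids of CityGML building blocks
buildings = {"blockA": (10, 12, 3), "blockB": (11, 12, 3), "blockC": (200, 5, 90)}
for name, (x, y, z) in sorted(buildings.items(), key=lambda kv: morton3d(*kv[1])):
    print(name, morton3d(x, y, z))
```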

  13. Retrieval by a patient with apraxia of sensorimotor information from visually presented objects.

    PubMed

    Kobayakawa, Mutsutaka; Ohigashi, Yoshitaka

    2007-06-01

    Motor representations are reported to be implicitly evoked when one observes manipulatable objects (action potentiation). The relationship was examined between action potentiation and pantomime deficit in apraxia. Participants responded to line drawings of manipulatable objects with either the left or right hand, according to the color of the stimulus. In normal participants (N= 10, four women, six men, M age = 28.5 yr., SD = 5.6), responses were faster when the orientation of the stimulus was compatible with the response-hand grasp. However, the apraxic patient did not exhibit this compatibility effect. On a control task in which a nonobject (circle) was presented, all participants exhibited the compatibility effect. These results indicated that the apraxic patient was impaired in evoking motor representation associated with objects. Thus, in some cases, apraxic disorders may be attributable to a deficit in retrieving object-specific information for manipulation.

  14. Assimilation of OMI NO2 retrievals into the limited-area chemistry-transport model DEHM (V2009.0) with a 3-D OI algorithm

    NASA Astrophysics Data System (ADS)

    Silver, J. D.; Brandt, J.; Hvidberg, M.; Frydendall, J.; Christensen, J. H.

    2013-01-01

    Data assimilation is the process of combining real-world observations with a modelled geophysical field. The increasing abundance of satellite retrievals of atmospheric trace gases makes chemical data assimilation an increasingly viable method for deriving more accurate analysed fields and initial conditions for air quality forecasts. We implemented a three-dimensional optimal interpolation (OI) scheme to assimilate retrievals of NO2 tropospheric columns from the Ozone Monitoring Instrument into the Danish Eulerian Hemispheric Model (DEHM, version V2009.0), a three-dimensional, regional-scale, offline chemistry-transport model. The background error covariance matrix, B, was estimated based on differences in the NO2 concentration field between paired simulations using different meteorological inputs. Background error correlations were modelled as non-separable, horizontally homogeneous and isotropic. Parameters were estimated for each month and for each hour to allow for seasonal and diurnal patterns in NO2 concentrations. Three experiments were run to compare the effects of observation thinning and the choice of observation errors. Model performance was assessed by comparing the analysed fields to an independent set of observations: ground-based measurements from European air-quality monitoring stations. The analysed NO2 and O3 concentrations were more accurate than those from a reference simulation without assimilation, with increased temporal correlation for both species. Thinning of satellite data and the use of constant observation errors yielded a better balance between the observed increments and the prescribed error covariances, with no appreciable degradation in the surface concentrations due to the observation thinning. Forecasts were also considered and these showed rather limited influence from the initial conditions once the effects of the diurnal cycle are accounted for. The simple OI scheme was effective and computationally feasible in this context
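
    The OI analysis step described here has the standard closed form x_a = x_b + BH^T(HBH^T + R)^(-1)(y - Hx_b); a toy sketch with a small synthetic state and made-up covariances is:

```python
# Sketch: one optimal-interpolation (OI) analysis update for a toy 1-D field.
# x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b); all matrices are made up.
import numpy as np

n, m = 50, 8                                   # state size, number of observations
rng = np.random.default_rng(4)

xb = rng.normal(size=n)                        # background (model) state
H = np.zeros((m, n)); H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
y = H @ xb + rng.normal(scale=0.5, size=m)     # synthetic observations

# Homogeneous, isotropic background error covariance with exponential correlation
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 1.0 * np.exp(-dist / 5.0)
R = 0.25 * np.eye(m)                           # constant observation errors

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
xa = xb + K @ (y - H @ xb)                     # analysed state
print(np.linalg.norm(y - H @ xa), "<", np.linalg.norm(y - H @ xb))
```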

  15. Semantic memory retrieval: cortical couplings in object recognition in the N400 window.

    PubMed

    Supp, Gernot G; Schlögl, Alois; Fiebach, Christian J; Gunter, Thomas C; Vigliocco, Gabriella; Pfurtscheller, Gert; Petsche, Hellmuth

    2005-02-01

    To characterize the regional changes in neuronal couplings and information transfer related to semantic aspects of object recognition in humans we used partial-directed EEG-coherence analysis (PDC). We examined the differences of processing recognizable and unrecognizable pictures as reflected by changes in cortical networks within the time-window of a determined event-related potential (ERP) component, namely the N400. Fourteen participants performed an image recognition task, while sequentially confronted with pictures of recognizable and unrecognizable objects. The time-window of N400 as indicative of object semantics was defined from the ERP. Differences of PDC in the beta-band between these tasks were represented topographically as patterns of electrical couplings, possibly indicating changing degrees of functional cooperation between brain areas. Successful memory retrieval of picture meaning appears to be supported by networks comprising left temporal and parietal regions and bilateral frontal brain areas.

  16. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation

    NASA Astrophysics Data System (ADS)

    Balvert, Marleen; van Hoof, Stefan J.; Granton, Patrick V.; Trani, Daniela; den Hertog, Dick; Hoffmann, Aswin L.; Verhaegen, Frank

    2015-07-01

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans becomes difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight on the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and
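
    One simple way to realise the operator-weighted trade-off described above is to scalarise the objectives into a single non-negative least-squares problem over beam-on times; the dose-influence matrices, prescription and weights below are entirely hypothetical, and this sketch ignores the interactive Pareto navigation.

```python
# Sketch: weighted-sum scalarization of a two-objective beam-on-time problem,
# solved as non-negative least squares. Dose-influence matrices are made up.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_beams = 12
D_tumor = rng.random((40, n_beams))            # dose per unit beam-on time (target voxels)
D_oar = 0.3 * rng.random((25, n_beams))        # dose per unit beam-on time (OAR voxels)

prescription = 2.0                             # desired target dose (Gy), hypothetical
w_tumor, w_oar = 1.0, 0.4                      # operator-defined trade-off weights

# Stack weighted objectives: hit the prescription in the target, push OAR dose to 0.
A = np.vstack([w_tumor * D_tumor, w_oar * D_oar])
b = np.concatenate([w_tumor * prescription * np.ones(40), np.zeros(25)])

t, residual = nnls(A, b)                       # beam-on times must be non-negative
print("beam-on times:", np.round(t, 2))
```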

  17. Common and differential electrophysiological mechanisms underlying semantic object memory retrieval probed by features presented in different stimulus types.

    PubMed

    Chiang, Hsueh-Sheng; Eroh, Justin; Spence, Jeffrey S; Motes, Michael A; Maguire, Mandy J; Krawczyk, Daniel C; Brier, Matthew R; Hart, John; Kraut, Michael A

    2016-08-01

    How the brain combines the neural representations of features that comprise an object in order to activate a coherent object memory is poorly understood, especially when the features are presented in different modalities (visual vs. auditory) and domains (verbal vs. nonverbal). We examined this question using three versions of a modified Semantic Object Retrieval Test, where object memory was probed by a feature presented as a written word, a spoken word, or a picture, followed by a second feature always presented as a visual word. Participants indicated whether each feature pair elicited retrieval of the memory of a particular object. Sixteen subjects completed one of the three versions (N=48 in total) while their EEG were recorded simultaneously. We analyzed EEG data in four separate frequency bands (delta: 1-4Hz, theta: 4-7Hz; alpha: 8-12Hz; beta: 13-19Hz) using a multivariate data-driven approach. We found that alpha power time-locked to response was modulated by both cross-modality (visual vs. auditory) and cross-domain (verbal vs. nonverbal) probing of semantic object memory. In addition, retrieval trials showed greater changes in all frequency bands compared to non-retrieval trials across all stimulus types in both response-locked and stimulus-locked analyses, suggesting dissociable neural subcomponents involved in binding object features to retrieve a memory. We conclude that these findings support both modality/domain-dependent and modality/domain-independent mechanisms during semantic object memory retrieval.

  18. The Semantic Object Retrieval Test (SORT) in normal aging and Alzheimer disease.

    PubMed

    Kraut, Michael A; Cherry, Barbara; Pitcock, Jeffery A; Vestal, Lindsey; Henderson, Victor W; Hart, John

    2006-12-01

    To characterize performance on a test of semantic object retrieval (Semantic Object Retrieval Test-SORT) in healthy, elderly subjects and patients with Alzheimer disease (AD). Although the initial presentation of patients with AD often reflects impairment in delayed recall for verbally encoded memory, common complaints of patients with early AD are actually related to semantic memory impairment. Thirty-eight AD patients and 121 healthy aging controls enrolled in an Alzheimer's Disease Center received a battery of standard neuropsychologic tests including the SORT. Compared with normal controls, AD patients had SORT memory impairments with significantly more false positive memory errors, fewer correctly produced names, and more substitutions in the name production aspect of the test. SORT had robust test-retest reliability in normals. The SORT task provides a direct, specific assessment of semantic memory, and has now been administered to 121 healthy, aging controls for normative ranges of performance, and to AD patients. The task detected semantic memory deficits in approximately half of patients with mild-moderate AD, which is comparable to other studies assessing semantic deficits in AD with less specific measures.

  19. Dusty: an assistive mobile manipulator that retrieves dropped objects for people with motor impairments

    PubMed Central

    King, Chih-Hung; Chen, Tiffany L; Fan, Zhengqin; Glass, Jonathan D; Kemp, Charles C

    2012-01-01

    People with physical disabilities have ranked object retrieval as a high priority task for assistive robots. We have developed Dusty, a teleoperated mobile manipulator that fetches objects from the floor and delivers them to users at a comfortable height. In this paper, we first demonstrate the robot's high success rate (98.4%) when autonomously grasping 25 objects considered important by people with amyotrophic lateral sclerosis (ALS). We tested the robot with each object in five different configurations on five types of flooring. We then present the results of an experiment in which 20 people with ALS operated Dusty. Participants teleoperated Dusty to move around an obstacle, pick up an object, and deliver the object to themselves. They successfully completed this task in 59 out of 60 trials (3 trials each) with a mean completion time of 61.4 seconds (SD=20.5 seconds), and reported high overall satisfaction using Dusty (7-point Likert scale; 6.8 SD=0.6). Participants rated Dusty to be significantly easier to use than their own hands, asking family members, and using mechanical reachers (p < 0.03, paired t-tests). 14 of the 20 participants reported that they would prefer using Dusty over their current methods. PMID:22013888

  20. Dusty: an assistive mobile manipulator that retrieves dropped objects for people with motor impairments.

    PubMed

    King, Chih-Hung; Chen, Tiffany L; Fan, Zhengqin; Glass, Jonathan D; Kemp, Charles C

    2012-03-01

    People with physical disabilities have ranked object retrieval as a high-priority task for assistive robots. We have developed Dusty, a teleoperated mobile manipulator that fetches objects from the floor and delivers them to users at a comfortable height. In this paper, we first demonstrate the robot's high success rate (98.4%) when autonomously grasping 25 objects considered important by people with amyotrophic lateral sclerosis (ALS). We tested the robot with each object in five different configurations on five types of flooring. We then present the results of an experiment in which 20 people with ALS operated Dusty. Participants teleoperated Dusty to move around an obstacle, pick up an object and deliver the object to themselves. They successfully completed this task in 59 out of 60 trials (3 trials each) with a mean completion time of 61.4 seconds (SD = 20.5 seconds), and reported high overall satisfaction using Dusty (7-point Likert scale; 6.8, SD = 0.6). Participants rated Dusty to be significantly easier to use than their own hands, asking family members, and using mechanical reachers (p < 0.03, paired t-tests). Fourteen of the 20 participants reported that they would prefer using Dusty over their current methods.

  1. Direct single-shot phase retrieval for separated objects (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Leshem, Ben; Xu, Rui; Miao, Jianwei; Nadler, Boaz; Oron, Dan; Dudovich, Nirit; Raz, Oren

    2016-03-01

    The phase retrieval problem arises in various fields ranging from physics and astronomy to biology and microscopy. Computational reconstruction of the Fourier phase from a single diffraction pattern is typically achieved using iterative alternating-projection algorithms, which pose a non-convex computational challenge. A different approach is holography, which relies on a known reference field. Here we present a conceptually new approach for the reconstruction of two (or more) sufficiently separated objects. In our approach, we combine the constraint that the objects are finite with the information in the interference between them to construct an overdetermined set of linear equations. We show that this set of equations is guaranteed to yield the correct solution almost always and that it can be solved efficiently by standard numerical linear algebra tools. Essentially, our method combines a commonly used constraint (that the object is finite) with a holographic approach (interference information). It differs from holographic methods in that a known reference field is not required; instead, the unknown objects serve as references to one another (hence, blind holography). Our method can be applied in a single shot for two (or more) separated objects, or with several measurements of a single object. It can benefit phase imaging techniques such as Fourier ptychography microscopy, as well as coherent diffractive X-ray imaging, in which the generation of a well-characterized, high-resolution reference beam poses a major challenge. We demonstrate our method experimentally both in the optical domain and in the X-ray domain using XFEL pulses.

  2. 3D differential phase contrast microscopy

    PubMed Central

    Chen, Michael; Tian, Lei; Waller, Laura

    2016-01-01

    We demonstrate 3D phase and absorption recovery from partially coherent intensity images captured with a programmable LED array source. Images are captured through-focus with four different illumination patterns. Using first Born and weak object approximations (WOA), a linear 3D differential phase contrast (DPC) model is derived. The partially coherent transfer functions relate the sample’s complex refractive index distribution to intensity measurements at varying defocus. Volumetric reconstruction is achieved by a global FFT-based method, without an intermediate 2D phase retrieval step. Because the illumination is spatially partially coherent, the transverse resolution of the reconstructed field achieves twice the NA of coherent systems and improved axial resolution. PMID:27867705

  3. Mental rotation of objects retrieved from memory: a functional MRI study of spatial processing.

    PubMed

    Just, M A; Carpenter, P A; Maguire, M; Diwadkar, V; McMains, S

    2001-09-01

    This functional MRI study examined how people mentally rotate a 3-dimensional object (an alarm clock) that is retrieved from memory and rotated according to a sequence of auditory instructions. We manipulated the geometric properties of the rotation, such as having successive rotation steps around a single axis versus alternating between 2 axes. The latter condition produced much more activation in several areas. Also, the activation in several areas increased with the number of rotation steps. During successive rotations around a single axis, the activation was similar for rotations in the picture plane and rotations in depth. The parietal (but not extrastriate) activation was similar to mental rotation of a visually presented object. The findings indicate that a large-scale cortical network computes different types of spatial information by dynamically drawing on each of its components to a differential, situation-specific degree.

  4. Object and proper name retrieval in temporal lobe epilepsy: a study of difficulties and latencies.

    PubMed

    Condret-Santi, Valérie; Barragan-Jason, Gladys; Valton, Luc; Denuelle, Marie; Curot, Jonathan; Nespoulous, Jean-Luc; Barbeau, Emmanuel J

    2014-12-01

    Retrieving a specific name is sometimes difficult and can be even harder when pathology affects the temporal lobes. Word finding difficulties have been well documented in temporal lobe epilepsy (TLE) but analyses have mostly concentrated on the study of accuracy. Our aim here was to go beyond simple accuracy and to provide both a quantitative and a qualitative assessment of naming difficulties and latencies in patients with TLE. Thirty-two patients with temporal lobe epilepsy (16 with epilepsy affecting the cerebral hemisphere dominant for language (D-TLE) and 16 with epilepsy affecting the cerebral hemisphere non-dominant for language (ND-TLE)) and 34 healthy matched control subjects were included in the study. The experiment involved naming 70 photographs of objects and 70 photographs of celebrities as fast as possible. Accuracy and naming reaction times were recorded. Following each trial, a questionnaire was used to determine the specific nature of each subject's difficulty in retrieving the name (e.g., no difficulty, paraphasia, tip of the tongue, feeling of knowing the name, etc). Reaction times were analysed both across subjects and across trials. D-TLE patients showed consistent and quasi-systematic impairment compared to matched control subjects on both object and famous people naming. This impairment was characterized not only by lower accuracy but also by more qualitative errors and tip of the tongue phenomena. Furthermore, minimum reaction times were slowed down by about 70 ms for objects and 150 ms for famous people naming. In contrast, patients with ND-TLE were less impaired, and their impairment was limited to object naming. These results suggest that patients with TLE, in particular D-TLE, show a general impairment of lexical access. Furthermore, there was evidence of subtle difficulties (increased reaction times) in patients with TLE. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
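
    The discriminability d' used as the dependent variable is the difference of z-transformed hit and false-alarm rates; a minimal sketch (with made-up response counts) is:

```python
# Sketch: signal-detection discriminability d' = z(hit rate) - z(false-alarm rate).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Simple correction to avoid rates of exactly 0 or 1
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts from one viewing condition (e.g. binocular, with motion parallax)
print(round(d_prime(hits=42, misses=8, false_alarms=10, correct_rejections=40), 2))
```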

  6. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.

  7. Improving Nearest Neighbour Search in 3d Spatial Access Method

    NASA Astrophysics Data System (ADS)

    Suhaibaha, A.; Rahman, A. A.; Uznir, U.; Anton, F.; Mioc, D.

    2016-10-01

    Nearest Neighbour (NN) is one of the important queries and analyses for spatial applications. In normal practice, a spatial access method structure is used during Nearest Neighbour query execution to retrieve information from the database. However, most spatial access method structures still face unresolved issues such as overlap among nodes and repetitive data entries. This situation leads to excessive Input/Output (I/O) operations, which is inefficient for data retrieval, and it becomes more critical when dealing with 3D data. The size of 3D data is usually large due to its detailed geometry and other attached information. In this research, a clustered 3D hierarchical structure is introduced as a 3D spatial access method structure. The structure is expected to improve the retrieval of Nearest Neighbour information for 3D objects. Several tests are performed for single Nearest Neighbour search and k Nearest Neighbour (kNN) search. The tests indicate that the clustered hierarchical structure is efficient in handling Nearest Neighbour queries compared to its competitor. From the results, the clustered hierarchical structure reduced repetitive data entries and the number of accessed pages. The proposed structure also produced minimal Input/Output operations, and the query response time outperformed that of the competitor. As a future outlook of this research, several possible applications are discussed and summarized.
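
    Independently of the particular access structure, the single-NN and kNN queries themselves can be sketched with a generic spatial index; here SciPy's cKDTree stands in for the clustered hierarchical structure proposed in the paper, and the 3D object centroids are random placeholders.

```python
# Sketch: single-NN and kNN queries over 3-D object centroids using a KD-tree
# as a generic stand-in for the clustered hierarchical access structure.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
centroids = rng.random((10_000, 3)) * 1000.0   # hypothetical 3D city-object centroids
tree = cKDTree(centroids)

query = np.array([500.0, 500.0, 20.0])
dist1, idx1 = tree.query(query, k=1)           # single nearest neighbour
distk, idxk = tree.query(query, k=5)           # k nearest neighbours (k = 5)
print(idx1, idxk)
```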

  8. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free-floating experimental platform developed for the acquisition of long-duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh-generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver, and a combined rotation-translation maneuver. The effectiveness of each maneuver in generating slosh is determined by the deviation of the experiment's trajectory as compared to a dry-mass trajectory. To fully capture the effect of liquid re-distribution on the experiment's trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh-generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.
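
    As a minimal illustration of the deviation metric described above (this is not the Flow-3D/GMO setup; the trajectories below are synthetic and the magnitudes are arbitrary), the per-sample Euclidean distance between the slosh and dry-mass trajectories can be computed directly:

    ```python
    # Hypothetical sketch: deviation between a slosh trajectory and a dry-mass trajectory.
    import numpy as np

    def trajectory_deviation(traj_slosh, traj_dry):
        """Per-sample Euclidean deviation and its peak for (N, 3) position arrays
        sampled at the same time steps."""
        diff = np.linalg.norm(traj_slosh - traj_dry, axis=1)
        return diff, diff.max()

    # Synthetic example: a dry trajectory plus a few centimetres of slosh-induced drift.
    t = np.linspace(0.0, 10.0, 501)
    dry = np.column_stack([0.05 * t, np.zeros_like(t), np.zeros_like(t)])            # metres
    slosh = dry + np.column_stack([0.003 * np.sin(t), 0.002 * t, np.zeros_like(t)])
    per_sample, peak = trajectory_deviation(slosh, dry)
    print(f"peak deviation: {peak * 100:.1f} cm")
    ```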

  9. Age-related changes in feature-based object memory retrieval as measured by event-related potentials.

    PubMed

    Chiang, Hsueh-Sheng; Mudar, Raksha A; Spence, Jeffrey S; Pudhiyidath, Athula; Eroh, Justin; DeLaRosa, Bambi; Kraut, Michael A; Hart, John

    2014-07-01

    To investigate neural mechanisms that support semantic functions in aging, we recorded scalp EEG during an object retrieval task in 22 younger and 22 older adults. The task required determining whether a particular object could be retrieved when two visual words representing object features were presented. Both age groups had comparable accuracy, although response times were longer in older adults. In both groups a left fronto-temporal negative potential occurred at around 750 ms during object retrieval, consistent with previous findings (Brier, Maguire, Tillman, Hart, & Kraut, 2008). In older adults only, a later positive frontal potential peaking between 800 and 1000 ms was found during non-retrieval trials. These findings suggest younger and older adults employ comparable neural mechanisms when features clearly facilitate retrieval of an object memory, but when features yield no retrieval, older adults use additional neural resources to engage in a more effortful and exhaustive search prior to making a decision. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow; the flow of gas, water, and blood in the lung; neurological structure and function; modeling; and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are needed to achieve certain objectives through measurements of objects. For example, in order to improve performance in sports or the beauty of a person, we measure form, dimensions, appearance, and movements.

  11. Object-oriented analysis and design of an ECG storage and retrieval system integrated with an HIS.

    PubMed

    Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S

    1996-03-01

    For a hospital information system, object-oriented methodology plays an increasingly important role, especially for the management of digitized data, e.g., the electrocardiogram, electroencephalogram, electromyogram, spirogram, X-ray, CT, and histopathological images, which are not yet computerized in most hospitals. As a first step in an object-oriented approach to hospital information management and to storing medical data in an object-oriented database, we connected electrocardiographs to a hospital network and established the integration of an ECG storage and retrieval system with a hospital information system. In this paper, the object-oriented analysis and design of the ECG storage and retrieval system are reported.
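
    Purely as an illustration of the kind of encapsulation an object-oriented ECG store implies (the class names, attributes, and in-memory store below are invented and are not taken from the paper or from any hospital system):

    ```python
    # Hypothetical sketch: an object-oriented ECG record and a minimal store.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ECGRecord:
        patient_id: str
        recorded_at: datetime
        sampling_rate_hz: int
        leads: dict = field(default_factory=dict)   # lead name -> list of samples

    class ECGStore:
        """In-memory stand-in for an object database keyed by patient."""
        def __init__(self):
            self._by_patient = {}

        def store(self, record):
            self._by_patient.setdefault(record.patient_id, []).append(record)

        def retrieve(self, patient_id):
            return self._by_patient.get(patient_id, [])

    store = ECGStore()
    store.store(ECGRecord("P001", datetime(1996, 3, 1, 9, 30), 500, {"I": [0.0, 0.1, 0.2]}))
    print(len(store.retrieve("P001")))
    ```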

  12. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    NASA Astrophysics Data System (ADS)

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space, and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and the assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis in works by Goldin-Meadow (2000) and McNeill (1992), this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students in constructing the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky, 1978). This study shows that in the laboratory setting, language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the gestural patterns of the students allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  13. Evaluation of the effects of 3D diffusion, crystal geometry, and initial conditions on retrieved time-scales from Fe-Mg zoning in natural oriented orthopyroxene crystals

    NASA Astrophysics Data System (ADS)

    Krimer, Daniel; Costa, Fidel

    2017-01-01

    Volcano petrologists and geochemists increasingly use time-scale determinations of magmatic processes obtained from modeling the chemical zoning patterns in crystals. Most determinations are done using one-dimensional traverses across a two-dimensional crystal section. However, crystals are three-dimensional objects with complex shapes, and diffusion and re-equilibration occur in multiple dimensions. Given that we can mainly study crystals in two-dimensional petrographic thin sections, the determined time-scales could be in error if multi-dimensional and geometrical effects are not identified and accounted for. Here we report the results of a numerical study in which we investigate the role of multiple dimensions, geometry, and initial conditions of Fe-Mg diffusion in an orthopyroxene crystal, with a view towards proper determination of time-scales from modeling natural crystals. We found that merging diffusion fronts (i.e. diffusion from multiple directions) cause 'additional' diffusion that has the greatest influence close to the crystal's corners (i.e. where two crystal faces meet), and with longer times the affected area widens. We also found that the one-dimensional traverses that lead to the most accurate calculated time-scales from natural crystals are those along the b crystallographic axis on the ab-plane, when model inputs (concentration and zoning geometry) are taken as measured (rather than inferred from other observations). More specifically, accurate time-scales are obtained if the compositional traverses are highly symmetrical and contain a concentration plateau measured through the crystal center. On the other hand, for two-dimensional models the ab- and ac-planes are better suited if the initial (pre-diffusion) concentration and zoning geometry inputs are known or can be estimated, although these are a priori unknown and thus may be difficult to use in practical terms. We also found that under certain conditions, a combined one-dimensional and two
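
    For background, the one-dimensional case underlying such traverse models can be sketched with an explicit finite-difference scheme; the diffusivity, traverse length, and initial step profile below are illustrative values only and are not taken from the paper.

    ```python
    # Illustrative 1D Fe-Mg diffusion relaxation with an explicit FTCS scheme.
    import numpy as np

    def diffuse_1d(profile, D, dx, total_time):
        """Relax dC/dt = D * d2C/dx2 with no-flux boundaries (stable dt <= 0.5 dx^2 / D)."""
        dt = 0.4 * dx**2 / D
        c = profile.astype(float).copy()
        for _ in range(int(total_time / dt)):
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
            c[0], c[-1] = c[1], c[-2]          # no-flux boundary conditions
        return c

    x = np.linspace(0.0, 500e-6, 201)          # 500-micrometre traverse
    dx = x[1] - x[0]
    c0 = np.where(x < 250e-6, 0.85, 0.80)      # illustrative initial compositional step
    c = diffuse_1d(c0, D=1e-17, dx=dx, total_time=30 * 24 * 3600.0)   # roughly 30 days
    print(c[98:103])                           # smoothed values around the former step
    ```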

  14. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low-light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization must be handled carefully. In our approach, polarimetric 3D integral images are generated using Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon-starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon-counting integral imaging.
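
    For reference, the linear Stokes parameters and the degree of linear polarization are commonly estimated from intensities measured behind polarizers at 0, 45, 90, and 135 degrees; the sketch below is not the authors' Maximum Likelihood / Total Variation pipeline, and the Poisson photon counts are synthetic.

    ```python
    # Illustrative Stokes-parameter and degree-of-linear-polarization estimate.
    import numpy as np

    def linear_stokes(i0, i45, i90, i135, eps=1e-9):
        s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
        return s0, s1, s2, dolp

    # Synthetic photon-starved measurements of a partially polarized scene.
    rng = np.random.default_rng(1)
    i0, i45, i90, i135 = (rng.poisson(lam, size=(64, 64)).astype(float)
                          for lam in (6, 4, 2, 4))
    s0, s1, s2, dolp = linear_stokes(i0, i45, i90, i135)
    print("mean degree of linear polarization:", dolp.mean())
    ```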

  15. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    DOE PAGES

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan; ...

    2016-02-22

    The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.
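
    A small numerical illustration of why separation helps (this is not the authors' reconstruction algorithm; the object shapes, sizes, and positions below are arbitrary): the inverse Fourier transform of the measured intensity is the field's autocorrelation, and for two well-separated objects the cross-correlation terms appear away from the central self-terms, which is what allows each object to act as a reference for the other.

    ```python
    # Autocorrelation of the diffraction intensity of two separated objects.
    import numpy as np

    n = 256
    field = np.zeros((n, n))
    field[60:80, 60:80] = 1.0            # object A
    field[170:200, 160:190] = 0.7        # object B, well separated from A

    intensity = np.abs(np.fft.fft2(field))**2            # measured diffraction pattern
    autocorr = np.fft.fftshift(np.abs(np.fft.ifft2(intensity)))

    # The central region holds the self-correlations of A and B; the off-centre lobes
    # are the A-with-B cross-correlations, displaced by the objects' separation vector.
    centre = autocorr[n//2 - 25:n//2 + 25, n//2 - 25:n//2 + 25].sum()
    print("central fraction of autocorrelation energy:", centre / autocorr.sum())
    ```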

  16. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    PubMed Central

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan; Miao, Jianwei; Nadler, Boaz; Oron, Dan; Dudovich, Nirit; Raz, Oren

    2016-01-01

    The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects. PMID:26899582

  17. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    SciTech Connect

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan; Miao, Jianwei; Nadler, Boaz; Oron, Dan; Dudovich, Nirit; Raz, Oren

    2016-02-22

    The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.

  18. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD 5,000. This scanner uses visible light sensing to capture structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  19. Altered Neural Activity during Semantic Object Memory Retrieval in Amnestic Mild Cognitive Impairment as Measured by Event-Related Potentials.

    PubMed

    Chiang, Hsueh-Sheng; Mudar, Raksha A; Pudhiyidath, Athula; Spence, Jeffrey S; Womack, Kyle B; Cullum, C Munro; Tanner, Jeremy A; Eroh, Justin; Kraut, Michael A; Hart, John

    2015-01-01

    Deficits in semantic memory in individuals with amnestic mild cognitive impairment (aMCI) have been previously reported, but the underlying neurobiological mechanisms remain to be clarified. We examined event-related potentials (ERPs) associated with semantic memory retrieval in 16 individuals with aMCI as compared to 17 normal controls using the Semantic Object Retrieval Task (EEG SORT). In this task, subjects judged whether pairs of words (object features) elicited retrieval of an object (retrieval trials) or not (non-retrieval trials). Behavioral findings revealed that aMCI subjects had lower accuracy scores and marginally longer reaction times compared to controls. We used a multivariate analytical technique (STAT-PCA) to investigate similarities and differences in ERPs between the aMCI and control groups. STAT-PCA revealed a left fronto-temporal component starting at around 750 ms post-stimulus in both groups. However, unlike controls, aMCI subjects showed an increase in the frontal-parietal scalp potential distinguishing retrieval from non-retrieval trials between 950 and 1050 ms post-stimulus, which was negatively correlated with performance on the logical memory subtest of the Wechsler Memory Scale-III. Thus, individuals with aMCI were not only impaired in their behavioral performance on SORT relative to controls, but also displayed alterations in the corresponding ERPs. The altered neural activity in aMCI compared to controls suggests a more sustained and effortful search during object memory retrieval, which may be a potential marker indicating disease processes at the pre-dementia stage.

  20. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert Protein Data Bank files (.pdb) into stereolithography files (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial makes it possible, with a very simple protocol, to generate customized three-dimensional structures that can be printed by a low-cost 3D printer and used for teaching chemical education…

  2. Correction of MR image distortions induced by metallic objects using a 3D cubic B-spline basis set: application to stereotactic surgical planning.

    PubMed

    Skare, S; Andersson, J L R

    2005-07-01

    Metallic implants in MRI cause spin-echo (SE) images to be distorted in the slice and frequency-encoding directions. Chang and Fitzpatrick (IEEE Trans Med Imaging 1992;11:319-329) proposed a distortion correction method (termed the CF method) based on the magnitude images from two SE acquisitions that differ only in the polarity of the frequency-encoding and slice-selection gradients. In the present study we solved some problems with the CF method, primarily by modeling the field inhomogeneities as a single 3D displacement field built from 3D cubic B-splines. The 3D displacement field was applied in the actual distortion direction in the slice/frequency-encoding plane. To account for patient head motion, a 3D rigid-body motion correction was also incorporated into the model. Experiments on a phantom containing an aneurysm clip showed that the knot spacing between the B-splines is a very important factor in both the final image quality and the processing speed. Depending on the knot spacing and the image volume size, the number of unknowns ranges from a few thousand to over 100,000, leading to processing times ranging from minutes to days. Optimal knot spacing, a means of increasing the processing speed, and other parameters are investigated and discussed.
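
    For background, a uniform cubic B-spline displacement model in one dimension can be sketched as below (the 3D field in the paper is the tensor product of three such 1D bases); the control-point values and knot spacing are illustrative and are not taken from the study.

    ```python
    # Illustrative 1D cubic B-spline displacement model on a uniform knot grid.
    import numpy as np

    def cubic_bspline_weights(t):
        """Weights of the 4 neighbouring control points for fractional position t in [0, 1)."""
        return np.array([
            (1.0 - t) ** 3 / 6.0,
            (3.0 * t**3 - 6.0 * t**2 + 4.0) / 6.0,
            (-3.0 * t**3 + 3.0 * t**2 + 3.0 * t + 1.0) / 6.0,
            t**3 / 6.0,
        ])

    def displacement_1d(x, coeffs, knot_spacing):
        """Displacement at position x from control-point coefficients on a uniform grid."""
        i = int(np.floor(x / knot_spacing))
        t = x / knot_spacing - i
        return float(cubic_bspline_weights(t) @ coeffs[i:i + 4])

    coeffs = np.array([0.0, 0.5, 1.2, 0.8, 0.1, 0.0, 0.0])   # illustrative control values (mm)
    print(displacement_1d(x=12.0, coeffs=coeffs, knot_spacing=10.0))
    ```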

  3. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.

  4. Hybridization of phase retrieval and off-axis digital holography for high resolution imaging of complex shape objects

    NASA Astrophysics Data System (ADS)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2017-05-01

    In this paper, a hybrid method of phase retrieval and off-axis digital holography is proposed for imaging of complex-shaped objects. An off-axis digital hologram and an in-line hologram are recorded. The approximate phase distributions in the recording plane and the object plane are obtained by a constrained optimization approach from the off-axis holog