Science.gov

Sample records for 3d object representation

  1. Generation of geometric representations of 3D objects in CAD/CAM by digital photogrammetry

    NASA Astrophysics Data System (ADS)

    Li, Rongxing

    This paper presents a method for the generation of geometric representations of 3D objects by digital photogrammetry. In CAD/CAM systems, geometric modelers are usually used to create three-dimensional (3D) geometric representations for design and manufacturing purposes. However, in cases where geometric information such as the dimensions and shapes of objects is not available, measurements of physically existing objects become necessary. In this paper, geometric parameters of primitives of 3D geometric representations such as Boundary Representation (B-rep), Constructive Solid Geometry (CSG), and digital surface models are determined by digital image matching techniques. An algorithm for the reconstruction of surfaces with discontinuities is developed. Interfaces between digital photogrammetric data and these geometric representations are realized. This method can be applied to design and manufacturing in mechanical engineering, the automobile industry, robot technology, spatial information systems and other fields.

  2. The representation of moving 3-D objects in apparent motion perception.

    PubMed

    Hidaka, Souta; Kawachi, Yousuke; Gyoba, Jiro

    2009-08-01

    In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. By conducting three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred convex hemispheres with low-pass filtering than when they shifted to concave hemispheres. These results suggest that the internal representation of an object in apparent motion contains incomplete depth information intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information with low-spatial-frequency components. PMID:19633345

  3. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  4. CAD/CAM/CAE representation of 3D objects measured by fringe projection

    NASA Astrophysics Data System (ADS)

    Pancewicz, Tomasz; Kujawinska, Malgorzata

    1998-07-01

    In this paper, the creation of a virtual object based on optical measurement of a 3D object by the fringe projection technique, coupled with the capabilities of CAD systems, is presented. The basic stages of this task, which is the most important part of the reverse engineering process, are discussed, and the procedure is formulated in the terms and definitions of the theory of optimal algorithms. The quality criteria of a virtual object are defined, and the influence of the consecutive stages of the task on the quality of the virtual object is discussed.

  5. A computational model that recovers the 3D shape of an object from a single 2D retinal representation.

    PubMed

    Li, Yunfeng; Pizlo, Zygmunt; Steinman, Robert M

    2009-05-01

    Human beings perceive 3D shapes veridically, but the underlying mechanisms remain unknown. The problem of producing veridical shape percepts is computationally difficult because the 3D shapes have to be recovered from 2D retinal images. This paper describes a new model, based on a regularization approach, that does this very well. It uses a new simplicity principle composed of four shape constraints: viz., symmetry, planarity, maximum compactness and minimum surface. Maximum compactness and minimum surface have never been used before. The model was tested with random symmetrical polyhedra. It recovered their 3D shapes from a single randomly-chosen 2D image. Neither learning, nor depth perception, was required. The effectiveness of the maximum compactness and the minimum surface constraints was measured by how well the aspect ratio of the 3D shapes was recovered. These constraints were effective; they recovered the aspect ratio of the 3D shapes very well. Aspect ratios recovered by the model were compared to aspect ratios adjusted by four human observers. They also adjusted aspect ratios very well. In those rare cases in which the human observers showed large errors in adjusted aspect ratios, their errors were very similar to the errors made by the model. PMID:18621410
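
    As a rough illustration of the two new constraints named above, the sketch below scores candidate shapes by rewarding 3D compactness (V^2/S^3) and penalizing surface area. An axis-aligned box stands in for a recovered shape; the parameterization, weights and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_surface_and_volume(w, h, d):
    """Surface area and volume of an axis-aligned box (a stand-in shape)."""
    surface = 2.0 * (w * h + w * d + h * d)
    volume = w * h * d
    return surface, volume

def compactness(surface, volume):
    """Dimensionless 3D compactness V^2 / S^3 (maximal for a sphere)."""
    return volume ** 2 / surface ** 3

def recovery_cost(w, h, d, lambda_surface=0.5):
    """Toy cost combining maximum compactness and minimum surface.

    Lower is better: compactness is rewarded, surface area penalized.
    The weight and the box parameterization are illustrative only.
    """
    surface, volume = box_surface_and_volume(w, h, d)
    return -compactness(surface, volume) + lambda_surface * surface

# Example: among boxes of unit volume, the cube (1, 1, 1) scores best.
candidates = [(1.0, 1.0, 1.0), (2.0, 1.0, 0.5), (4.0, 1.0, 0.25)]
best = min(candidates, key=lambda dims: recovery_cost(*dims))
print(best)  # (1.0, 1.0, 1.0)
```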

  6. New method of 3-D object recognition

    NASA Astrophysics Data System (ADS)

    He, An-Zhi; Li, Qun Z.; Miao, Peng C.

    1991-12-01

    In this paper, a new method of 3-D object recognition using optical techniques and a computer is presented. We perform 3-D object recognition by using moiré contouring to obtain the object's 3-D coordinates, projecting drawings of the object onto three coordinate planes to describe it, and using a method of querying a judgement library to match objects. The recognition of a simple geometrical entity is simulated by computer and studied experimentally. The recognition of an object which is composed of a few simple geometrical entities is discussed.

  7. Formal representation of 3D structural geological models

    NASA Astrophysics Data System (ADS)

    Wang, Zhangang; Qu, Honggang; Wu, Zixing; Yang, Hongjun; Du, Qunle

    2016-05-01

    The development and widespread application of geological modeling methods have increased demands for the integration and sharing services of three-dimensional (3D) geological data. However, theoretical research in the field of geological information sciences is limited despite the widespread use of Geographic Information Systems (GIS) in geology. In particular, fundamental research on the formal representations and standardized spatial descriptions of 3D structural models is required. This is necessary for accurate understanding and further applications of geological data in 3D space. In this paper, we propose a formal representation method for 3D structural models using the theory of point set topology, which produces a mathematical definition for the major types of geological objects. The spatial relationships between geologic boundaries, structures, and units are explained in detail using the 9-intersection model. Reasonable conditions for describing the topological space of 3D structural models are also provided. The results from this study can be used as potential support for the standardized representation and spatial quality evaluation of 3D structural models, as well as for specific needs related to model-based management, query, and analysis.
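
    For readers unfamiliar with the 9-intersection model mentioned above, the sketch below computes the 3x3 intersection matrix for two objects given as finite point sets. This is a simplified stand-in for illustration only; continuous geological boundaries and units would require a geometric kernel. The toy example reproduces the classic "meet" pattern.

```python
def nine_intersection(interior_a, boundary_a, exterior_a,
                      interior_b, boundary_b, exterior_b):
    """9-intersection matrix for two objects given as finite point sets.

    Each entry records whether the intersection of the corresponding
    parts (interior/boundary/exterior) is empty (0) or non-empty (1).
    """
    parts_a = [interior_a, boundary_a, exterior_a]
    parts_b = [interior_b, boundary_b, exterior_b]
    return [[int(bool(pa & pb)) for pb in parts_b] for pa in parts_a]

# Toy example on a 1-D integer "space" {0..9}: A = [2,5] and B = [5,8] meet at 5.
space = set(range(10))
int_a, bnd_a = {3, 4}, {2, 5}
int_b, bnd_b = {6, 7}, {5, 8}
ext_a = space - int_a - bnd_a
ext_b = space - int_b - bnd_b
print(nine_intersection(int_a, bnd_a, ext_a, int_b, bnd_b, ext_b))
# [[0, 0, 1], [0, 1, 1], [1, 1, 1]]  -> the "meet" relation pattern
```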

  8. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  9. 3D Model Segmentation and Representation with Implicit Polynomials

    NASA Astrophysics Data System (ADS)

    Zheng, Bo; Takamatsu, Jun; Ikeuchi, Katsushi

    When large-scale and complex 3D objects are obtained by range finders, it is often necessary to represent them by algebraic surfaces for such purposes as data compression, multi-resolution, noise elimination, and 3D recognition. Representing the 3D data with algebraic surfaces of an implicit polynomial (IP) has proved to offer advantages: IP representation is capable of encoding geometric properties easily, with the desired smoothness, few parameters, algebraic/geometric invariants, and robustness to noise and missing data. Unfortunately, generating a high-degree IP surface for a whole complex 3D shape is impossible because of high computational cost and numerical instability. In this paper we propose a 3D segmentation method based on a cut-and-merge approach. Two cutting procedures adopt low-degree IPs to divide and fit the surface segments simultaneously, while avoiding generating highly curved segments. A merging procedure merges similar adjacent segments to avoid over-segmentation. To demonstrate the effectiveness of this segmentation method, we open up some new vistas for 3D applications such as 3D matching, recognition, and registration.
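
    A minimal sketch of the building block the segments are fitted with: a low-degree implicit polynomial (IP) surface estimated by a plain algebraic least-squares fit with a unit-norm coefficient constraint. The authors' fitting procedure and their cut-and-merge logic are more elaborate; this only shows the basic idea.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomials(points, degree):
    """Monomial basis up to `degree` evaluated at an (N, 3) point array."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = [np.ones(len(points))]
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement((x, y, z), d):
            cols.append(np.prod(combo, axis=0))
    return np.column_stack(cols)

def fit_implicit_polynomial(points, degree=2):
    """Least-squares IP fit: minimize ||M c||^2 subject to ||c|| = 1.

    The minimizer is the right singular vector of M with the smallest
    singular value (a standard algebraic fit, not the authors' exact method).
    """
    M = monomials(points, degree)
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[-1]  # coefficients c with f(p) = monomials(p) . c ~ 0 on the surface

# Example: points on the unit sphere are fit exactly by a degree-2 IP.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
coeffs = fit_implicit_polynomial(pts, degree=2)
print(np.abs(monomials(pts, 2) @ coeffs).max())  # close to 0 (points lie on a quadric)
```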

  10. 3D modeling of optically challenging objects.

    PubMed

    Park, Johnny; Kak, Avinash

    2008-01-01

    We present a system for constructing 3D models of real-world objects with optically challenging surfaces. The system utilizes a new range imaging concept called multi-peak range imaging, which stores multiple candidates of range measurements for each point on the object surface. The multiple measurements include the erroneous range data caused by various surface properties that are not ideal for structured-light range sensing. False measurements generated by spurious reflections are eliminated by applying a series of constraint tests. The constraint tests based on local surface and local sensor visibility are applied first to individual range images. The constraint tests based on global consistency of coordinates and visibility are then applied to all range images acquired from different viewpoints. We show the effectiveness of our method by constructing 3D models of five different optically challenging objects. To evaluate the performance of the constraint tests and to examine the effects of the parameters used in the constraint tests, we acquired the ground truth data by painting those objects to suppress the surface-related properties that cause difficulties in range sensing. Experimental results indicate that our method significantly improves upon the traditional methods for constructing reliable 3D models of optically challenging objects. PMID:18192707

  11. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addressed the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  12. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  13. Faint object 3D spectroscopy with PMAS

    NASA Astrophysics Data System (ADS)

    Roth, Martin M.; Becker, Thomas; Kelz, Andreas; Bohm, Petra

    2004-09-01

    PMAS is a fiber-coupled lens array type of integral field spectrograph, which was commissioned at the Calar Alto 3.5m Telescope in May 2001. The optical layout of the instrument was chosen so as to provide large wavelength coverage and good transmission from 0.35 to 1 μm. One of the major objectives of the PMAS development has been to perform 3D spectrophotometry, taking advantage of the contiguous array of spatial elements over the 2-dimensional field-of-view of the integral field unit. With science results obtained during the first two years of operation, we illustrate that 3D spectroscopy is an ideal tool for faint object spectrophotometry.

  14. 3D object retrieval using salient views.

    PubMed

    Atmosukarto, Indriyati; Shapiro, Linda G

    2013-06-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223-232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223-232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  15. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  16. 3D-dynamic representation of DNA sequences.

    PubMed

    Wąż, Piotr; Bielińska-Wąż, Dorota

    2014-03-01

    A new 3D graphical representation of DNA sequences is introduced. This representation is called 3D-dynamic representation. It is a generalization of the 2D-dynamic representation. The sequences are represented by sets of "material points" in 3D space. The resulting 3D-dynamic graphs are treated as rigid bodies. The descriptors characterizing the graphs are analogous to the ones used in classical dynamics. The classification diagrams derived from this representation are presented and discussed. Due to the third dimension, "the history of the graph" can be recognized graphically because the 3D-dynamic graph does not overlap with itself. Specific parts of the graphs correspond to specific parts of the sequence. This feature is essential for graphical comparisons of the sequences. Numerically, both 2D and 3D approaches are of high quality. In particular, a difference in a single base between two sequences can be identified and correctly described (one can identify which base) by both 2D and 3D methods. PMID:24567158
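
    A hedged sketch of the kind of construction described: bases mapped to step directions, the resulting walk treated as a set of unit-mass "material points" in 3D, and descriptors analogous to classical dynamics (centre of mass, principal moments of inertia) computed from it. The particular base-to-direction assignment is an assumption made for illustration, not the paper's convention.

```python
import numpy as np

# Assumed base-to-direction assignment (illustrative only).  The third
# coordinate advances with sequence position, so the walk never overlaps itself.
BASE_STEP = {"A": (-1, 0), "C": (0, 1), "G": (1, 0), "T": (0, -1)}

def dynamic_graph_3d(sequence):
    """Cumulative 3D walk: one unit-mass 'material point' per base."""
    points, x, y = [], 0.0, 0.0
    for z, base in enumerate(sequence, start=1):
        dx, dy = BASE_STEP[base]
        x, y = x + dx, y + dy
        points.append((x, y, float(z)))
    return np.array(points)

def inertia_descriptors(points):
    """Descriptors analogous to classical dynamics: centre of mass and
    principal moments of inertia of the unit-mass point set."""
    com = points.mean(axis=0)
    r = points - com
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    inertia = np.array([
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])
    return com, np.sort(np.linalg.eigvalsh(inertia))

com, moments = inertia_descriptors(dynamic_graph_3d("ACGTACGGTT"))
print(com, moments)
```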

  17. Developing Spatial Reasoning Through 3D Representations of the Universe

    NASA Astrophysics Data System (ADS)

    Summers, F.; Eisenhamer, B.; McCallister, D.

    2013-12-01

    Mental models of astronomical objects are often greatly hampered by the flat two-dimensional representation of pictures from telescopes. Lacking experience with the true structures in much of the imagery, there is no basis for anything but the default interpretation of a picture postcard. Using astronomical data and scientific visualizations, our professional development session allows teachers and their students to develop their spatial reasoning while forming more accurate and richer mental models. Examples employed in this session include star positions and constellations, morphologies of both normal and interacting galaxies, shapes of planetary nebulae, and three-dimensional structures in star forming regions. Participants examine, imagine, predict, and confront the 3D interpretation of well-known 2D imagery using authentic data from NASA, the Hubble Space Telescope, and other scientific sources. The session's cross-disciplinary nature includes science, math, and artistic reasoning while addressing common cosmic misconceptions. Seeing the stars of the Orion Constellation in 3D explodes the popular misconception that stars in a constellation are all at the same distance. A scientific visualization of two galaxies colliding provides a 3D comparison for Hubble images of interacting galaxies.

  18. PMAS - Faint Object 3D Spectrophotometry

    NASA Astrophysics Data System (ADS)

    Roth, M. M.; Becker, T.; Kelz, A.

    2002-01-01

    I will describe PMAS (Potsdam Multiaperture Spectrophotometer), which was commissioned at the Calar Alto Observatory 3.5m Telescope on May 28-31, 2001. PMAS is a dedicated, highly efficient UV-visual integral field spectrograph which is optimized for the spectrophotometry of faint point sources, typically superimposed on a bright background. PMAS is ideally suited for the study of resolved stars in Local Group galaxies. I will present results of our preliminary work with MPFS at the Russian 6m Telescope in Selentchuk, involving the development of new 3D data reduction software, and observations of faint planetary nebulae in the bulge of M31 for the determination of individual chemical abundances of these objects. Using these data, it will be demonstrated that integral field spectroscopy provides superior techniques for background subtraction, avoiding the otherwise inevitable systematic errors of conventional slit spectroscopy. The results will be put in the perspective of the study of resolved stellar populations in nearby galaxies with a new generation of Extremely Large Telescopes.

  19. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines the 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. The model allows the logical semantic expression and geometry of a city 3D model to be built quickly, and it solves the representation problem of city 3D spatial information in which the same location carries multiple properties and the same property occurs at multiple locations. The spatial object structures of point, line, polygon and body are designed for a city 3D spatial database, providing a new approach to city 3D GIS modeling and organization management.

  20. Visual inertia of rotating 3-D objects.

    PubMed

    Jiang, Y; Pantle, A J; Mark, L S

    1998-02-01

    Five experiments were designed to determine whether a rotating, transparent 3-D cloud of dots (simulated sphere) could influence the perceived direction of rotation of a subsequent sphere. Experiment 1 established conditions under which the direction of rotation of a virtual sphere was perceived unambiguously. When a near-far luminance difference and perspective depth cues were present, observers consistently saw the sphere rotate in the intended direction. In Experiment 2, a near-far luminance difference was used to create an unambiguous rotation sequence that was followed by a directionally ambiguous rotation sequence that lacked both the near-far luminance cue and the perspective cue. Observers consistently saw the second sequence as rotating in the same direction as the first, indicating the presence of 3-D visual inertia. Experiment 3 showed that 3-D visual inertia was sufficiently powerful to bias the perceived direction of a rotation sequence made unambiguous by a near-far luminance cue. Experiment 5 showed that 3-D visual inertia could be obtained using an occlusion depth cue to create an unambiguous inertia-inducing sequence. Finally, Experiments 2, 4, and 5 all revealed a fast-decay phase of inertia that lasted for approximately 800 msec, followed by an asymptotic phase that lasted for periods as long as 1,600 msec. The implications of these findings are examined with respect to motion mechanisms of 3-D visual inertia. PMID:9529911

  1. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
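
    The first step in the pipeline above, identifying and grouping above-ground points into candidate object regions from a DSM/DEM pair, might look like the following sketch. Array names, thresholds and the use of SciPy's labelling are illustrative assumptions; building/tree separation, boundary tracing, regularization and roof construction are separate steps not shown here.

```python
import numpy as np
from scipy import ndimage

def extract_object_regions(dsm, dem, min_height=2.0, min_cells=25):
    """Label candidate 3-D object regions from a DSM/DEM pair.

    dsm, dem   : 2-D arrays of elevations on the same grid (metres)
    min_height : ignore anything lower than this above the terrain
    min_cells  : drop regions smaller than this many grid cells
    """
    ndsm = dsm - dem                      # height above ground (normalized DSM)
    mask = ndsm >= min_height
    labels, n = ndimage.label(mask)       # 4-connected region grouping
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_cells) + 1
    return np.where(np.isin(labels, keep), labels, 0)

# Usage with synthetic data: one 10x10-cell "building" 6 m above flat terrain.
dem = np.zeros((100, 100))
dsm = dem.copy()
dsm[40:50, 40:50] = 6.0
print(np.unique(extract_object_regions(dsm, dem)))  # [0 1]
```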

  2. Technical note: 3D representation and analysis of enthesis morphology.

    PubMed

    Noldner, Lara K; Edgar, Heather J H

    2013-11-01

    This comparison of methods for assessing the development of muscle insertion sites, or entheses, suggests that three-dimensional (3D) quantification of enthesis morphology can produce a picture of habitual muscle use patterns in a past population that is similar to one produced by ordinal scores for describing enthesis morphology. Upper limb skeletal elements (humeri, radii, and ulnae) from a sample of 24 middle-aged adult males from the Pottery Mound site in New Mexico were analyzed for both fibrous and fibrocartilaginous enthesis development with three different methods: ordinal scores, two-dimensional (2D) area measurements, and 3D surface areas. The methods were compared using tests for asymmetry and correlations among variables in each quantitative data set. 2D representations of enthesis area did not agree as closely as ordinal scores and 3D surface areas did regarding which entheses were significantly asymmetrical. There was significant correlation between 3D and 2D data, but correlation coefficients were not consistently high. Intraobserver error was also assessed for the 3D method. Cronbach's alpha values fell between 0.68 and 0.73, and error rates for all entheses fell between 10% and 15%. Marginally acceptable intraobserver error and the analytic versatility of 3D images encourage further investigation of using 3D scanning technology for quantifying enthesis development. PMID:24105032

  3. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360 degree view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-coded structured lighting ensures the precise reconstruction of the depth of the object. A 3D imaging system architecture is presented. The architecture employs the displacement of the camera and the projector to triangulate the depth information. The 3D camera system has achieved high depth resolution down to 0.1 mm on a human-head-sized object and 360 degree imaging capability.
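
    A minimal sketch of the triangulation principle mentioned above, assuming a rectified (parallel-axis) camera-projector geometry; a real system such as the one described would use fully calibrated projection models rather than this simplified relation.

```python
import numpy as np

def depth_from_disparity(x_cam, x_proj, baseline, focal_length):
    """Depth by triangulation for a rectified camera-projector pair.

    x_cam, x_proj : horizontal coordinates (in pixel units) of the same
                    color-coded stripe in the camera image and projector pattern
    baseline      : camera-projector separation (e.g. metres)
    focal_length  : focal length expressed in pixel units

    Z = f * B / (x_cam - x_proj), the standard rectified-stereo relation.
    """
    disparity = np.asarray(x_cam, dtype=float) - np.asarray(x_proj, dtype=float)
    return focal_length * baseline / disparity

# Example: 0.2 m baseline, f = 2000 px, 40 px disparity -> 10 m depth.
print(depth_from_disparity(1060.0, 1020.0, baseline=0.2, focal_length=2000.0))
```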

  4. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly online available at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247
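
    A minimal sketch of sparse-representation-based classification (SRC) as referenced in the abstract: the query feature vector is coded over a dictionary of training vectors with an l1-regularized solver, and the class whose atoms give the smallest class-restricted reconstruction residual wins. The LASSO relaxation via scikit-learn and the toy features are assumptions, not the authors' solver or descriptors.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(dictionary, labels, query, alpha=0.01):
    """Sparse-representation classification over a dictionary whose columns
    are training feature vectors; `labels` gives one class per column."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(dictionary, query)                    # l1-regularized coding
    coef = lasso.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(query - dictionary @ coef_c)
    return min(residuals, key=residuals.get)

# Toy usage: two classes of 5-D feature vectors (stand-ins for PCA ear features).
rng = np.random.default_rng(1)
class0 = rng.normal(0.0, 0.1, size=(5, 3)) + np.array([[1, 0, 0, 0, 0]]).T
class1 = rng.normal(0.0, 0.1, size=(5, 3)) + np.array([[0, 1, 0, 0, 0]]).T
D = np.hstack([class0, class1])
y = np.array([0, 0, 0, 1, 1, 1])
query = np.array([1.05, 0.02, 0.0, -0.01, 0.03])
print(src_classify(D, y, query))  # expected: 0
```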

  5. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those, were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. It is now possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles), and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  6. Interactive photogrammetric system for mapping 3D objects

    NASA Astrophysics Data System (ADS)

    Knopp, Dave E.

    1990-08-01

    A new system, FOTO-G, has been developed for 3D photogrammetric applications. It is a production-oriented software system designed to work with highly unconventional photogrammetric image configurations which result when photographing 3D objects. A demonstration with imagery from an actual 3D-mapping project is reported.

  7. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  8. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating a 3D model represented by a Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.

  9. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme for the semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the matched 3D model to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases in order to label unknown images randomly selected from the web. Results obtained show promising performances, with recognition rates of up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  10. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  11. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was

  12. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural based visualisation products. The continuum of 3D plants models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. This approach of 3D plant object visualisation is primarily evident from the visualisation of plants using photographed billboarded images, to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  13. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and have low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  14. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    NASA Astrophysics Data System (ADS)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections

  15. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing, and FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films, with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed, and optical transparency is highly desirable in any fluidic device; integrated glass cover slips or polystyrene films would provide a perfectly transparent optical window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis, but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  16. Tomographic compressive holographic reconstruction of 3D objects

    NASA Astrophysics Data System (ADS)

    Nehmetallah, G.; Williams, L.; Banerjee, P. P.

    2012-10-01

    Compressive holography with multiple projection tomography is applied to solve the ill-posed inverse problem of reconstructing 3D objects with high axial accuracy. To visualize the 3D shape, we propose Digital Tomographic Compressive Holography (DiTCH), in which projections from more than one direction, as in tomographic imaging systems, can be employed, so that a 3D shape with better axial resolution can be reconstructed. We compare DiTCH with single-beam holographic tomography (SHOT), which is based on Fresnel back-propagation. A brief theory of DiTCH is presented, and experimental results of 3D shape reconstruction of objects using DiTCH and SHOT are compared.

  17. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration.

    PubMed

    Meijer, Frank; van den Broek, Egon L

    2010-03-17

    We investigated individual differences in interactively exploring 3D virtual objects. 36 participants explored 24 simple and 24 difficult objects (composed of three and five Biederman geons, respectively) actively, passively, or not at all. Both their 3D mental representation of the objects and their visuo-spatial ability (VSA) were assessed. Results show that, regardless of the object's complexity, people with a low VSA benefit from active exploration of objects, whereas people with a middle or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance of taking individual differences into account. PMID:20116394

  18. A Primitive-Based 3D Object Recognition System

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge in matching the 3D object models to the image data through pre-defined primitives. The primitives we have selected, to begin with, are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in the detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based systems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  19. An object-oriented 3D integral data model for digital city and digital mine

    NASA Astrophysics Data System (ADS)

    Wu, Lixin; Wang, Yanbing; Che, Defu; Xu, Lei; Chen, Xuexi; Jiang, Yun; Shi, Wenzhong

    2005-10-01

    With the rapid development of urban areas, city space has extended from the surface to the subsurface. As an important data source for the representation of city spatial information, 3D city spatial data have the characteristics of multiple objects, heterogeneity and multiple structures. They can be classified, with reference to the geo-surface, into three kinds: above-surface data, surface data and subsurface data. Current research on 3D city spatial information systems is naturally divided into two different branches, 3D City GIS (3D CGIS) and 3D Geological Modeling (3DGM). The former emphasizes the 3D visualization of buildings and the terrain of a city, while the latter emphasizes the visualization of geological bodies and structures. It is extremely important for city planning and construction to integrate all city spatial information, including above-surface, surface and subsurface objects, to conduct integral analysis and spatial manipulation. However, neither 3D CGIS nor 3DGM currently makes it easy to realize this information integration, integral analysis and spatial manipulation. Considering 3D spatial modeling theory and methodologies, an object-oriented 3D integral spatial data model (OO3D-ISDM) is presented and implemented in software. The model integrates geographical objects, surface buildings and geological objects seamlessly, with the TIN being their coupling interface. This paper introduces the conceptual model of OO3D-ISDM, which is comprised of 4 spatial elements, i.e. point, line, face and body, and 4 geometric primitives, i.e. vertex, segment, triangle and generalized tri-prism (GTP). The spatial model represents the geometry of surface buildings and geographical objects with triangles, and geological objects with GTPs. Any of the represented objects, whether surface buildings, terrain or subsurface objects, can be described with the basic geometric element, i.e. the triangle. So the 3D spatial objects, surface buildings, terrain and geological objects can be

  20. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  1. Key techniques for vision measurement of 3D object surface

    NASA Astrophysics Data System (ADS)

    Yang, Huachao; Zhang, Shubi; Guo, Guangli; Liu, Chao; Yu, Ruipeng

    2006-11-01

    Digital close-range photogrammetry systems and machine vision are widely used in production control and quality inspection. The main aim is to provide accurate 3D measurements or reconstructions of an object surface and to give an expression of the object's shape. First, the key techniques of camera calibration and target image positioning for 3D object surface vision measurement are briefly reviewed and analyzed in this paper. Then, an innovative and effective method for precise space coordinate measurement is proposed. Experiments proved that the proposed approaches to image segmentation and to the detection and positioning of circular marks are effective and valid. Appropriate weighting of the additional parameters, control points and orientation elements in bundle adjustment with self-calibration is advantageous for attaining high accuracy of space coordinates. The RMS error of the check points is less than ±1 mm, which meets the requirements of high-accuracy industrial measurement.

  2. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood
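
    The core AFoVs idea, predicting a novel view from a small number of reference views under 3-D linear-transformation assumptions, can be illustrated with the sketch below. Orthographic toy data and a plain least-squares coefficient fit stand in for the paper's training, indexing and learning machinery.

```python
import numpy as np

def afov_coefficients(ref1, ref2, novel):
    """Fit coefficients expressing novel-view point coordinates as a linear
    combination of the coordinates of the same points in two reference views.

    ref1, ref2, novel : (N, 2) arrays of corresponding image points.
    Under 3-D affine/linear viewing assumptions each novel coordinate is a
    linear function of (x1, y1, x2, y2, 1)."""
    basis = np.column_stack([ref1, ref2, np.ones(len(ref1))])  # (N, 5)
    coeffs, *_ = np.linalg.lstsq(basis, novel, rcond=None)     # (5, 2)
    return coeffs

def predict_view(ref1, ref2, coeffs):
    """Predict point positions in the novel view from the two references."""
    basis = np.column_stack([ref1, ref2, np.ones(len(ref1))])
    return basis @ coeffs

# Toy usage: three orthographic views of a random 3-D point set.
rng = np.random.default_rng(2)
pts3d = rng.normal(size=(8, 3))
def ortho_view(angle):           # rotate about the y-axis, keep (x, y)
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return (pts3d @ R.T)[:, :2]
v1, v2, v_new = ortho_view(0.0), ortho_view(0.4), ortho_view(0.9)
coeffs = afov_coefficients(v1, v2, v_new)
print(np.abs(predict_view(v1, v2, coeffs) - v_new).max())  # close to zero
```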

  3. Human efficiency for recognizing 3-D objects in luminance noise.

    PubMed

    Tjan, B S; Braje, W L; Legge, G E; Kersten, D

    1995-11-01

    The purpose of this study was to establish how efficiently humans use visual information to recognize simple 3-D objects. The stimuli were computer-rendered images of four simple 3-D objects--wedge, cone, cylinder, and pyramid--each rendered from 8 randomly chosen viewing positions as shaded objects, line drawings, or silhouettes. The objects were presented in static, 2-D Gaussian luminance noise. The observer's task was to indicate which of the four objects had been presented. We obtained human contrast thresholds for recognition, and compared these to an ideal observer's thresholds to obtain efficiencies. In two auxiliary experiments, we measured efficiencies for object detection and letter recognition. Our results showed that human object-recognition efficiency is low (3-8%) when compared to efficiencies reported for some other visual-information processing tasks. The low efficiency means that human recognition performance is limited primarily by factors intrinsic to the observer rather than the information content of the stimuli. We found three factors that play a large role in accounting for low object-recognition efficiency: stimulus size, spatial uncertainty, and detection efficiency. Four other factors play a smaller role in limiting object-recognition efficiency: observers' internal noise, stimulus rendering condition, stimulus familiarity, and categorization across views. PMID:8533342
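
    For context, efficiency in this kind of ideal-observer analysis is conventionally defined as the ratio of the ideal observer's threshold contrast energy to the human observer's; since contrast energy grows with the square of contrast for a fixed stimulus, this equals the squared ratio of threshold contrasts. The statement below is the standard textbook definition, not a formula quoted from the paper:

    ```latex
    \eta \;=\; \frac{E_{\mathrm{ideal}}}{E_{\mathrm{human}}}
         \;=\; \left(\frac{c_{\mathrm{ideal}}}{c_{\mathrm{human}}}\right)^{2},
    \qquad E \propto c^{2}\ \text{for a fixed stimulus shape.}
    ```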

  4. Shape representation for efficient landmark-based segmentation in 3-d.

    PubMed

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2014-04-01

    In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From such a perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce computational complexity, which results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded a symmetric surface distance of 0.75 mm and 1.11 mm, and a Dice coefficient of 93.6% and 96.2%, for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational costs were further reduced by a factor of up to three. PMID:24710155

  5. Consistent representations of and conversions between 3D rotations

    NASA Astrophysics Data System (ADS)

    Rowenhorst, D.; Rollett, A. D.; Rohrer, G. S.; Groeber, M.; Jackson, M.; Konijnenberg, P. J.; De Graef, M.

    2015-12-01

    In materials science the orientation of a crystal lattice is described by means of a rotation relative to an external reference frame. A number of rotation representations are in use, including Euler angles, rotation matrices, unit quaternions, Rodrigues-Frank vectors and homochoric vectors. Each representation has distinct advantages and disadvantages with respect to the ease of use for calculations and data visualization. It is therefore convenient to be able to easily convert from one representation to another. However, historically, each representation has been implemented using a set of often tacit conventions; separate research groups would implement different sets of conventions, thereby making the comparison of methods and results difficult and confusing. This tutorial article aims to resolve these ambiguities and provide a consistent set of conventions and conversions between common rotational representations, complete with worked examples and a discussion of the trade-offs necessary to resolve all ambiguities. Additionally, an open source Fortran-90 library of conversion routines for the different representations is made available to the community.
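
    One of the conversions such a set of routines must provide is unit quaternion to rotation matrix. The sketch below uses the common q = (w, x, y, z), active-rotation convention; as the article stresses, the convention is a choice, so treat this as one possible implementation rather than the paper's reference code.

    ```python
    import numpy as np

    def quat_to_matrix(q):
        """Unit quaternion q = (w, x, y, z) -> 3x3 rotation matrix (active rotation)."""
        w, x, y, z = q / np.linalg.norm(q)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    # 90 degree rotation about the z axis:
    q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    R = quat_to_matrix(q)   # maps (1, 0, 0) to (0, 1, 0)
    ```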

  6. The Visual Priming of Motion-Defined 3D Objects

    PubMed Central

    Jiang, Xiong; Jiang, Yang

    2015-01-01

    The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, there has been limited investigation on perceptual priming of shape perception of three-dimensional object structures defined by moving dots. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when preceded by a static shape or a semantic word that defined a different object shape. These results suggested that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defined the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed. PMID:26658496

  7. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  8. Large-scale objective phenotyping of 3D facial morphology

    PubMed Central

    Hammond, Peter; Suttie, Michael

    2012-01-01

    Abnormal phenotypes have played significant roles in the discovery of gene function, but organized collection of phenotype data has been overshadowed by developments in sequencing technology. In order to study phenotypes systematically, large-scale projects with standardized objective assessment across populations are considered necessary. The report of the 2006 Human Variome Project meeting recommended documentation of phenotypes through electronic means by collaborative groups of computational scientists and clinicians using standard, structured descriptions of disease-specific phenotypes. In this report, we describe progress over the past decade in 3D digital imaging and shape analysis of the face, and future prospects for large-scale facial phenotyping. Illustrative examples are given throughout using a collection of 1107 3D face images of healthy controls and individuals with a range of genetic conditions involving facial dysmorphism. PMID:22434506

  9. Automated full-3D shape measurement of cultural heritage objects

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Karaszewski, Maciej; Zaluski, Wojciech; Bolewicki, Pawel

    2009-07-01

    In this paper a fully automated 3D shape measurement system is presented. It consists of a rotary stage for placement of cultural heritage objects, a vertical linear stage with a mounted robot arm (with six degrees of freedom), and a structured-light measurement set-up mounted to the arm's head. All these manipulation devices are automatically controlled by collision detection and next-best-view calculation modules. The goal of the whole system is to measure the whole object automatically (without any user attention) and rapidly (in hours rather than days or weeks). The measurement head is automatically calibrated by the system, and its working volume ranges from a few centimeters up to one meter. We present measurement results for different working scenarios, along with a discussion of possible applications.

  10. Fully automatic 3D digitization of unknown objects

    NASA Astrophysics Data System (ADS)

    Rozenwald, Gabriel F.; Seulin, Ralph; Fougerolle, Yohan D.

    2010-01-01

    This paper presents a complete system for 3D digitization of objects assuming no prior knowledge of their shape. The proposed methodology is applied to a digitization cell composed of a fringe projection scanner head, a robotic arm with 6 degrees of freedom (DoF), and a turntable. A two-step approach is used to automatically guide the scanning process. The first step uses the concept of Mass Vector Chains (MVC) to perform an initial scanning. The second step directs the scanner to the remaining holes of the model. Post-processing of the data is also addressed. Tests with real objects were performed, and results on digitization time and number of views are provided along with estimated surface coverage.
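
    Read literally, the Mass Vector Chain of a watertight mesh (the sum of its area-weighted face normals) is zero, so for a partial scan the nonzero residual indicates where surface is still missing. The sketch below is one plausible interpretation of how that residual could propose the next viewing direction; it illustrates the concept, not the authors' algorithm.

    ```python
    import numpy as np

    def mass_vector_chain(vertices, faces):
        """Sum of area-weighted face normals of a triangle mesh.
        For a watertight mesh this sum is ~0; for a partial scan the
        residual points away from the yet-unscanned surface."""
        v = np.asarray(vertices, dtype=float)
        f = np.asarray(faces, dtype=int)
        cross = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
        return 0.5 * cross.sum(axis=0)       # area-weighted normal sum

    def next_view_direction(vertices, faces):
        """Propose a direction from which to look at the object next:
        roughly opposite to the current mass vector chain."""
        mvc = mass_vector_chain(vertices, faces)
        n = np.linalg.norm(mvc)
        return -mvc / n if n > 0 else None
    ```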

  11. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
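
    A simplified, single-pose version of the phase comparison described above might look like the sketch below: image-gradient phase angles are sampled at the pixels covered by projected model edges and compared with the edge-normal angles. The angle doubling for contrast-polarity invariance and all variable names are assumptions; the paper's FFT-accelerated search over all model positions and orientations is not reproduced here.

    ```python
    import numpy as np

    def gradient_phase(image):
        """Phase angle of the directional-derivative (gradient) vector at each pixel."""
        gy, gx = np.gradient(image.astype(float))
        return np.arctan2(gy, gx)

    def phase_similarity(image, edge_pixels, edge_normal_angles):
        """Mean cosine of the angle difference between image-gradient phase and
        projected model edge-normal phase, evaluated at the model's edge pixels
        (edge_pixels given as integer (x, y) pairs). Doubling the angles makes
        the score insensitive to contrast polarity."""
        phase = gradient_phase(image)
        img_angles = phase[edge_pixels[:, 1], edge_pixels[:, 0]]
        return float(np.mean(np.cos(2.0 * (img_angles - edge_normal_angles))))
    ```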

  12. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies. PMID:20395086

  13. 3-D attitude representation of human joints: a standardization proposal.

    PubMed

    Woltring, H J

    1994-12-01

    In view of the singularities, asymmetries and other adverse properties of existing, three-dimensional definitions for joint and segment angles, the present paper proposes a new convention for unambiguous and easily interpretable, 3-D joint angles, based on the concept of the attitude 'vector' as derived from Euler's theorem. The suggested standard can be easily explained to non-mathematically trained clinicians, is readily implemented in software, and can be simply related to classical Cardanic/Eulerian angles. For 'planar' rotations about a coordinate system's axes, the proposed convention coincides with the Cardanic convention. The attitude vector dispenses with the 'gimbal-lock' and non-orthogonality disadvantages of Cardanic/Eulerian conventions; therefore, its components have better metrical properties, and they are less sensitive to measurement errors and to coordinate system uncertainties than Cardanic/Eulerian angles. A sensitivity analysis and a physical interpretation of the proposed standard are given, and some experimental results that demonstrate its advantages. PMID:7806549
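
    For illustration, the attitude vector of Euler's theorem is the unit rotation axis scaled by the rotation angle. A minimal conversion from a rotation matrix, valid away from the 0 and 180 degree singular cases, could look like this (a sketch, not the paper's reference implementation):

    ```python
    import numpy as np

    def attitude_vector(R):
        """Rotation matrix -> attitude vector (unit rotation axis times angle),
        per Euler's theorem. Valid away from theta = 0 and theta = pi."""
        theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        if np.isclose(theta, 0.0):
            return np.zeros(3)
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
        return theta * axis
    ```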

  14. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
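
    The range recovery described above follows the usual chirp (FMCW) radar relation: the beat/IF frequency at a pixel is proportional to target range through the chirp slope. The numbers below are purely illustrative assumptions, not system parameters from the paper:

    ```python
    # Range from the measured IF (beat) frequency in a chirp (FMCW) radar:
    # R = c * f_if / (2 * S), where S is the chirp slope (bandwidth / sweep time).
    c = 3.0e8                       # speed of light, m/s
    bandwidth = 6.0e9               # 6 GHz sweep (assumed)
    sweep_time = 1.0e-3             # 1 ms chirp (assumed)
    slope = bandwidth / sweep_time  # chirp slope, Hz per second
    f_if = 400.0e3                  # measured IF frequency at one pixel, Hz (assumed)
    range_m = c * f_if / (2.0 * slope)   # -> 10.0 m
    ```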

  15. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate the processes of map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and required a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.

  16. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  17. Learning the 3-D structure of objects from 2-D views depends on shape, not format.

    PubMed

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-05-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  18. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from its arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed as the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. NURBS-skeleton is used to extract the skeleton of both views. Affine invariant property of the convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
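
    The final growing step described above amounts to a union of spheres: each skeleton point contributes a sphere whose radius equals its distance to the extracted boundary. A voxel-grid sketch of that idea (with made-up skeleton points and radii) is given below; it is an illustration of the step, not the authors' implementation.

    ```python
    import numpy as np

    def fill_spheres(skeleton_points, radii, grid_shape, voxel_size=1.0):
        """Union of spheres: mark every voxel whose centre lies inside any
        sphere (centre = skeleton point, radius = distance to the boundary)."""
        zz, yy, xx = np.indices(grid_shape)
        centres = np.stack([zz, yy, xx], axis=-1) * voxel_size
        occupied = np.zeros(grid_shape, dtype=bool)
        for p, r in zip(skeleton_points, radii):
            dist2 = ((centres - np.asarray(p, dtype=float)) ** 2).sum(axis=-1)
            occupied |= dist2 <= r * r
        return occupied

    # Two skeleton points of a toy tube, radii taken from an assumed distance field:
    volume = fill_spheres([(10, 16, 16), (20, 16, 16)], [5.0, 6.0], (32, 32, 32))
    ```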

  19. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine
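
    As an illustration of converting a sensed location between levels of such a hierarchy (e.g., head-centred to body-centred coordinates), a single rigid-body (homogeneous) transform suffices; the head pose used below is an assumed example, not a value from the paper.

    ```python
    import numpy as np

    def make_transform(R, t):
        """4x4 homogeneous transform from rotation matrix R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def head_to_body(point_head, T_body_from_head):
        """Express a head-centred 3D point in the body-centred frame."""
        p = np.append(point_head, 1.0)
        return (T_body_from_head @ p)[:3]

    # Assumed head pose: 30 degree yaw, head origin 0.3 m above the body origin.
    yaw = np.radians(30.0)
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    T = make_transform(R, np.array([0.0, 0.0, 0.3]))
    p_body = head_to_body(np.array([1.0, 0.0, 0.0]), T)
    ```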

  20. 3D representation of the non-rotating origin

    NASA Astrophysics Data System (ADS)

    de Viron, O.; Dehant, V.

    2005-09-01

    In the frame of the IAU working group on Nomenclature in Fundamental Astronomy (one of whose objectives is to make educational efforts to address the implementation of the IAU 2000 Resolutions for a large community of scientists), we have developed a set of didactic animations in order to give a physical understanding of the concept of the non-rotating origin (NRO). In this paper, we give a short explanation of the existing animations in order to encourage their use. A complete zip file with all the material is available at: http://danof.obspm.fr/iauWGnfa/Educational.html.

  1. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  2. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  3. 3D X-ray tomography to evaluate volumetric objects

    NASA Astrophysics Data System (ADS)

    de Oliveira, Luís. F.; Lopes, Ricardo T.; de Jesus, Edgar F. O.; Braz, Delson

    2003-06-01

    3D-CT and stereological techniques are used concomitantly. Quantitative stereology yields measurements that reflect areas, volumes, lengths, rates and frequencies of the test body. Two other quantities, connectivity and anisotropy, can be used as well to complete the analysis. In this paper, we present the application of 3D-CT and stereological quantification to the analysis of a special kind of test body: ceramic filters, which have an internal structure similar to cancellous bone. The stereology is adapted to work with the 3D nature of the tomographic data. Results for connectivity and anisotropy are also presented.

  4. Two Eyes, 3D Early Results: Stereoscopic vs 2D Representations of Highly Spatial Scientific Imagery

    NASA Astrophysics Data System (ADS)

    Price, Aaron

    2013-06-01

    "Two Eyes, 3D" is a 3-year NSF funded research project to study the educational impacts of using stereoscopic representations in informal settings. The first study conducted as part of the project tested children 5-12 on their ability to perceive spatial elements of slides of scientific objects shown to them in either stereoscopic or 2D format. Children were also tested for prior spatial ability. Early results suggest that stereoscopy does not have a major impact on perceiving spatial elements of an image, but it does have a more significant impact on how the children apply that knowledge when presented with a common sense situation. The project is run by the AAVSO and this study was conducted at the Boston Museum of Science.

  5. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.

  6. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427

  7. True-3D Accentuating of Grids and Streets in Urban Topographic Maps Enhances Human Object Location Memory

    PubMed Central

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information. PMID:25679208

  8. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    PubMed

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that focus on investigations whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1/10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information. PMID:25679208

  9. Color and size interactions in a real 3D object similarity task.

    PubMed

    Ling, Yazhu; Hurlbert, Anya

    2004-08-31

    In the natural world, objects are characterized by a variety of attributes, including color and shape. The contributions of these two attributes to object recognition are typically studied independently of each other, yet they are likely to interact in natural tasks. Here we examine whether color and size (a component of shape) interact in a real three-dimensional (3D) object similarity task, using solid domelike objects whose distinct apparent surface colors are independently controlled via spatially restricted illumination from a data projector hidden to the observer. The novel experimental setup preserves natural cues to 3D shape from shading, binocular disparity, motion parallax, and surface texture cues, while also providing the flexibility and ease of computer control. Observers performed three distinct tasks: two unimodal discrimination tasks, and an object similarity task. Depending on the task, the observer was instructed to select the indicated alternative object which was "bigger than," "the same color as," or "most similar to" the designated reference object, all of which varied in both size and color between trials. For both unimodal discrimination tasks, discrimination thresholds for the tested attribute (e.g., color) were increased by differences in the secondary attribute (e.g., size), although this effect was more robust in the color task. For the unimodal size-discrimination task, the strongest effects of the secondary attribute (color) occurred as a perceptual bias, which we call the "saturation-size effect": Objects with more saturated colors appear larger than objects with less saturated colors. In the object similarity task, discrimination thresholds for color or size differences were significantly larger than in the unimodal discrimination tasks. We conclude that color and size interact in determining object similarity, and are effectively analyzed on a coarser scale, due to noise in the similarity estimates of the individual attributes

  10. PGD and separated space variables representation for linear elasticity in 3D representation of plate domains

    NASA Astrophysics Data System (ADS)

    Bognet, B.; Leygue, A.; Chinesta, F.; Poitou, A.

    2011-01-01

    In this paper, we focus on the simulation of the linear elastic behaviour of plates using a 3D approach whose numerical cost scales only like that of a 2D one. In the case of plates, the kinematic hypothesis introduced in plate theories to go from 3D to 2D is usually unsatisfactory where one cannot rely on St Venant's principle (usually close to the plate edges). We propose to apply the PGD (Proper Generalized Decomposition) method [1] to the simulation of the linear elastic behavior of plates. This method allows us to separately search for the in-plane and the out-of-plane contributions to the 3D solution, yielding significant savings in computational cost. The method is validated on a simple case and its full potential is then presented for the simulation of the behavior of laminated composite plates.
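
    Schematically, the separated representation sought by a PGD solver for a plate can be written as below (generic notation, not copied from the paper): the in-plane functions P_i and the through-thickness functions T_i are computed greedily, one product at a time, and combined component-wise, so the 3D field is never discretized as a full 3D mesh.

    ```latex
    \mathbf{u}(x,y,z) \;\approx\; \sum_{i=1}^{N} \mathbf{P}_i(x,y) \circ \mathbf{T}_i(z),
    \qquad
    \mathbf{P}_i:\ \text{in-plane functions},\quad
    \mathbf{T}_i:\ \text{through-thickness functions}.
    ```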

  11. Parts, Cavities, and Object Representation in Infancy

    ERIC Educational Resources Information Center

    Hayden, Angela; Bhatt, Ramesh S.; Kangas, Ashley; Zieber, Nicole

    2011-01-01

    Part representation is not only critical to object perception but also plays a key role in a number of basic visual cognition functions, such as figure-ground segregation, allocation of attention, and memory for shapes. Yet, virtually nothing is known about the development of part representation. If parts are fundamental components of object shape…

  12. Combining depth and color data for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV-sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data is pre-processed in several ways and the obtained representation is used to train a RAM neural network (memory based reasoning approach) to detect different components within TV-sets. The shape recognizing architecture has been implemented and tested in a demonstration setup.

  13. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
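
    As a hedged illustration of the sparse representation-based classification (SRC) step (the generic single-task version, not the paper's multitask variant), a probe descriptor can be coded over the gallery dictionary with orthogonal matching pursuit and assigned to the class with the smallest class-specific reconstruction residual; the sparsity level and the use of scikit-learn are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(D, labels, y, n_nonzero=10):
        """Sparse-representation-based classification of one probe descriptor y.

        D:      dictionary, one column per gallery descriptor (d x n), unit-norm columns
        labels: class label of each column (length n)
        y:      probe descriptor (length d)
        """
        labels = np.asarray(labels)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(D, y)
        x = omp.coef_
        residuals = {}
        for c in np.unique(labels):
            xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
            residuals[c] = np.linalg.norm(y - D @ xc)
        return min(residuals, key=residuals.get)
    ```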

  14. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  15. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and increasing Internet usage requires object recognition for certain applications, particularly for occluded objects. Occlusion, however, remains an unresolved issue: it disturbs the relations between the feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users source images while overcoming the problems caused by occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to identify their pros and cons in solving the occlusion problem, that is, in extracting features from an occluded object that distinguish it from other co-existing objects and in determining new techniques that can differentiate the occluded fragments and sections inside an image.

  16. An optimal sensing strategy for recognition and localization of 3-D natural quadric objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hernsoo

    1991-01-01

    An optimal sensing strategy for an optical proximity sensor system engaged in the recognition and localization of 3-D natural quadric objects is presented. The optimal sensing strategy consists of the selection of an optimal beam orientation and the determination of an optimal probing plane that compose an optimal data collection operation known as an optimal probing. The decision of an optimal probing is based on the measure of discrimination power of a cluster of surfaces on a multiple interpretation image (MII), where the measure of discrimination power is defined in terms of a utility function computing the expected number of interpretations that can be pruned out by a probing. An object representation suitable for active sensing based on a surface description vector (SDV) distribution graph and hierarchical tables is presented. Experimental results are shown.

  17. Profile of students' comprehension of 3D molecule representation and its interconversion on chirality

    NASA Astrophysics Data System (ADS)

    Setyarini, M.; Liliasari, Kadarohman, Asep; Martoprawiro, Muhamad A.

    2016-02-01

    This study aims to describe (1) students' level of comprehension and (2) the factors causing difficulties in comprehending 3D molecule representations and their interconversion in the context of chirality. Data were collected using a multiple-choice test consisting of eight questions. The participants were required to give answers along with their reasoning. The test was developed based on indicators of concept comprehension. The study was conducted with 161 college students enrolled in the stereochemistry topic in the odd semester (2014/2015) from two LPTK (teacher training institutes) in Bandar Lampung and Gorontalo, and one public university in Bandung. The results indicate that 5% of the college students showed a high level of comprehension of 3D molecule representations and their interconversion, 22% a moderate level, and 73% a low level. The dominant factors identified as causes of difficulty in comprehending 3D molecule representations and their interconversion were (i) lack of spatial awareness, (ii) violation of the rules for determining absolute configuration, (iii) imprecise placement of the observer, (iv) lack of rotation operations, and (v) lack of understanding of the correlations between the representations. This study recommends that instruction include more rigorous spatial-awareness training tasks accompanied by dynamic visualization media of the molecules involved; learning with static molecular models can also help students overcome the difficulties they encounter.

  18. Interrupting Infants' Persisting Object Representations: An Object-Based Limit?

    ERIC Educational Resources Information Center

    Cheries, Erik W.; Wynn, Karen; Scholl, Brian J.

    2006-01-01

    Making sense of the visual world requires keeping track of objects as the same persisting individuals over time and occlusion. Here we implement a new paradigm using 10-month-old infants to explore the processes and representations that support this ability in two ways. First, we demonstrate that persisting object representations can be maintained…

  19. Frio, Yegua objectives of E. Texas 3D seismic

    SciTech Connect

    1996-07-01

    Houston companies plan to explore deeper formations along the Sabine River on the Texas and Louisiana Gulf Coast. PetroGuard Co. Inc. and Jebco Seismic Inc., Houston, jointly secured a seismic and leasing option from Hankamer family et al. on about 120 sq miles in Newton County, Tex., and Calcasieu Parish, La. PetroGuard, which specializes in oilfield rehabilitation, has production experience in the area. Historic production in the area spans three major geologic trends: Oligocene Frio/Hackberry, downdip and mid-dip Eocene Yegua, and Eocene Wilcox. In the southern part of the area, to be explored first, the trends lie at 9,000--10,000 ft, 10,000--12,000 ft, and 14,000--15,000 ft, respectively. Output Exploration Co., an affiliate of Input/Output Inc., Houston, acquired from PetroGuard and Jebco all exploratory drilling rights in the option area. Output will conduct 3D seismic operations over nearly half the acreage this summer. Data acquisition started late this spring. Output plans to use a combination of a traditional land recording system and I/O's new RSR 24 bit radio telemetry system because the area spans environments from dry land to swamp.

  20. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and on different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar or better than state-of-the-art on textured and non textured shape retrieval benchmarks and give interesting insights on effectiveness of different shape descriptors and graph kernels. PMID:26372206

  1. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain concerns sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than as the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information, preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems that fuse information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data streams vary from one urban domain to another and from system to system, which is why it is almost impossible to design one complete system that takes care of all conceivable cases, now and in the future, within a single constrained software design. On several occasions we have advocated a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, in which the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than on core technical and developmental issues. The project focused primarily on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  2. Segmentation of Blood Vessels and 3D Representation of CMR Image

    NASA Astrophysics Data System (ADS)

    Jiji, G. W.

    2013-06-01

    Current cardiac magnetic resonance (CMR) imaging technology allows the determination of patient-individual coronary tree structure, detection of infarctions, and assessment of myocardial perfusion. The purpose of this work is to segment the heart's blood vessels and visualize them in 3D. In this work, the 3D visualisation of vessels was performed in four phases. The first step is to detect the tubular structures using a multiscale medialness function, which distinguishes tube-like structures from other structures. The second step is to extract the centrelines of the tubes; from the centreline radius, the cylindrical tube model is constructed. The third step is segmentation of the tubular structures, in which the cylindrical tube model is used. The fourth step is the 3D representation of the tubular structure using Volume. The proposed approach was applied to 10 datasets of patients from the clinical routine, and the results were assessed with radiologists.

  3. Recognition of Simple 3D Geometrical Objects under Partial Occlusion

    NASA Astrophysics Data System (ADS)

    Barchunova, Alexandra; Sommer, Gerald

    In this paper we present a novel procedure for contour-based recognition of partially occluded three-dimensional objects. In our approach we use images of real and rendered objects whose contours have been deformed by a restricted change of the viewpoint. The preparatory part consists of contour extraction, preprocessing, local structure analysis and feature extraction. The main part deals with an extended construction and functionality of the classifier ensemble Adaptive Occlusion Classifier (AOC). It relies on a hierarchical fragmenting algorithm to perform a local structure analysis which is essential when dealing with occlusions. In the experimental part of this paper we present classification results for five classes of simple geometrical figures: prism, cylinder, half cylinder, a cube, and a bridge. We compare classification results for three classical feature extractors: Fourier descriptors, pseudo Zernike and Zernike moments.
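
    Fourier descriptors, one of the classical feature extractors compared above, can be computed from a closed contour as in the sketch below. This is a common simplified formulation (drop the DC term for translation invariance, divide by the first harmonic for scale invariance, keep magnitudes for rotation and starting-point invariance), not the authors' exact configuration.

    ```python
    import numpy as np

    def fourier_descriptors(contour, n_coeffs=16):
        """Fourier descriptors of a closed 2D contour (N x 2 array of points).

        The contour is treated as the complex signal x + iy; dropping the DC term
        removes translation, dividing by |c_1| removes scale, and keeping only
        magnitudes removes rotation and starting-point dependence."""
        z = contour[:, 0] + 1j * contour[:, 1]
        coeffs = np.fft.fft(z)
        mags = np.abs(coeffs[1:n_coeffs + 1])
        return mags / mags[0]            # normalize by |c_1|

    # Example: an ellipse sampled at 128 points
    t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    ellipse = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
    fd = fourier_descriptors(ellipse)
    ```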

  4. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard dose CT (SDCT) images. PMID:26980176

  5. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach leads to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  6. 3D representations of amino acids—applications to protein sequence comparison and classification

    PubMed Central

    Li, Jie; Koehl, Patrice

    2014-01-01

    The amino acid sequence of a protein is the key to understanding its structure and ultimately its function in the cell. This paper addresses the fundamental issue of encoding amino acids in such a way that the resulting representation of a protein sequence facilitates the decoding of its information content. We show that a feature-based representation in a three-dimensional (3D) space derived from amino acid substitution matrices provides an adequate representation that can be used for direct comparison of protein sequences based on geometry. We measure the performance of such a representation in the context of the protein structural fold prediction problem. We compare the results of classifying different sets of proteins belonging to distinct structural folds against classifications of the same proteins obtained from sequence alone or directly from structural information. We find that sequence alone performs poorly as a structure classifier. We show, in contrast, that the use of the three-dimensional representation of the sequences significantly improves the classification accuracy. We conclude with a discussion of the current limitations of such a representation and with a description of potential improvements. PMID:25379143

  7. 3D Palmprint Identification Using Block-Wise Features and Collaborative Representation.

    PubMed

    Zhang, Lin; Shen, Ying; Li, Hongyu; Lu, Jianwei

    2015-08-01

    Developing 3D palmprint recognition systems has recently begun to draw the attention of researchers. Compared with its 2D counterpart, 3D palmprint has several unique merits. However, most of the existing 3D palmprint matching methods are designed for one-to-one verification and are not efficient for the one-to-many identification case. In this paper, we fill this gap by proposing a collaborative representation (CR) based framework with l1-norm or l2-norm regularizations for 3D palmprint identification. The effects of the different regularization terms have been evaluated in experiments. To use the CR-based classification framework, one key issue is how to extract feature vectors. To this end, we propose a block-wise statistics based feature extraction scheme. We divide a 3D palmprint ROI into uniform blocks and extract a histogram of surface types from each block; histograms from all blocks are then concatenated to form a feature vector. Such feature vectors are highly discriminative and are robust to minor misalignment. Experiments demonstrate that the proposed CR-based framework with an l2-norm regularization term can achieve much better recognition accuracy than the other methods. More importantly, its computational complexity is extremely low, making it quite suitable for large-scale identification applications. Source codes are available at http://sse.tongji.edu.cn/linzhang/cr3dpalm/cr3dpalm.htm. PMID:26353008
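
    The l2-regularized collaborative representation classifier mentioned above has a closed-form coding step, which the generic sketch below implements over pre-extracted feature vectors (for example, the block-wise surface-type histograms described in the abstract); the regularization value and feature dimensions are illustrative assumptions, not the authors' code.

    # Generic l2-regularized collaborative representation classifier (CRC) sketch.
    import numpy as np

    class CRCClassifier:
        def __init__(self, lam=0.01):
            self.lam = lam

        def fit(self, X, y):
            """X: (n_samples, n_features) gallery features, y: class labels."""
            self.X = X.T                                    # dictionary, columns = gallery samples
            self.y = np.asarray(y)
            d = self.X.shape[1]
            # Precompute the regularized projection (X^T X + lam*I)^(-1) X^T
            self.P = np.linalg.solve(self.X.T @ self.X + self.lam * np.eye(d), self.X.T)
            return self

        def predict(self, queries):
            labels = []
            for q in np.atleast_2d(queries):
                alpha = self.P @ q                          # collaborative coding coefficients
                residuals = []
                for c in np.unique(self.y):
                    idx = self.y == c
                    r = q - self.X[:, idx] @ alpha[idx]     # class-specific reconstruction error
                    residuals.append(np.linalg.norm(r) / (np.linalg.norm(alpha[idx]) + 1e-12))
                labels.append(np.unique(self.y)[int(np.argmin(residuals))])
            return np.array(labels)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (20, 32)), rng.normal(3, 1, (20, 32))])
        y = np.array([0] * 20 + [1] * 20)
        clf = CRCClassifier(lam=0.01).fit(X, y)
        print(clf.predict(rng.normal(3, 1, (5, 32))))       # expected mostly class 1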

  8. Video reframing relying on panoramic estimation based on a 3D representation of the scene

    NASA Astrophysics Data System (ADS)

    de Simon, Agnes; Figue, Jean; Nicolas, Henri

    2000-05-01

    This paper describes a new method for creating mosaic images from an original video and for computing a new sequence that modifies some camera parameters such as image size, scale factor and view angle. A mosaic image is a representation of the full scene observed by a moving camera during its displacement. It provides a wide angle of view of the scene from a sequence of images shot with a narrow-angle-of-view camera. This paper proposes a method to create a virtual sequence from a calibrated original video and a rough 3D model of the scene. A 3D relationship between the original and virtual images gives the corresponding pixels in different images for the same 3D point of the scene model. To texture the model with natural textures obtained from the original sequence, a criterion based on constraints related to the temporal variations of the background and on 3D geometric considerations is used. Finally, in the presented method, the textured 3D model is used to recompute a new sequence of images with a possibly different point of view and camera aperture angle. The algorithm is being validated on virtual sequences, and the results obtained so far are encouraging.

  9. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning method. Laser scanning and photogrammetry are the two main methods used. For laser scanning, a video camera and a laser source are necessary, and for photogrammetry, a digital still camera with a high-resolution sensor is indispensable. In some 3D modeling tasks, the two methods are often integrated to get satisfactory results. Although much research has been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scanning system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Many consumer digital cameras, such as the Canon EOS 5D Mark II, offer still-photo recording at more than 10 megapixels and full 1080p HD movie recording, so an integrated scanning system can be designed around such a camera. A square plate glued with coded marks is used to place the 3D objects, and two straight wooden rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate define a world coordinate system and can be used as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes are obtained in this module. In the laser scanning module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to determine the position of the laser. The laser scan results in a dense point cloud which can be aligned automatically through the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy-functional method by fusion of the feature points, the rough volume and the dense point cloud. The design

  10. Reducing Non-Uniqueness in Satellite Gravity Inversion using 3D Object Oriented Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2013-12-01

    Non-uniqueness of satellite gravity interpretation has usually been reduced by using a priori information from various sources, e.g. seismic tomography models. The reduction in non-uniqueness has been based on velocity-density conversion formulas or on user interpretation of 3D subsurface structures (objects) in seismic tomography models. However, these processes introduce additional uncertainty, either through the conversion relations, which depend on other physical parameters such as temperature and pressure, or through bias in the interpretation due to user choices and experience. In this research, a new methodology is introduced to extract 3D subsurface structures from 3D geophysical data using a state-of-the-art 3D Object Oriented Image Analysis (OOA) technique. 3D OOA is tested using a set of synthetic models that simulate the real situation in the study area of this research. Then, 3D OOA is used to extract 3D subsurface objects from a real 3D seismic tomography model. The extracted 3D objects are used to reconstruct a forward model, and its response is compared with the measured satellite gravity. Finally, the result of the forward modelling, based on the extracted 3D objects, is used to constrain the inversion process of the satellite gravity data. Through this work, a new object-based approach is introduced to interpret and extract 3D subsurface objects from 3D geophysical data. This can be used to constrain modelling and inversion of potential field data using the 3D subsurface structures extracted from other methods. In summary, a new approach is introduced to constrain inversion of satellite gravity measurements and enhance interpretation capabilities.

  11. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal that allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lower Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920

  12. 3D imaging of amplitude objects embedded in phase objects using transport of intensity

    NASA Astrophysics Data System (ADS)

    Banerjee, Partha; Basunia, Mahmudunnabi

    2015-09-01

    The amplitude and phase of the complex optical field in the Helmholtz equation obey a pair of coupled equations, arising from equating the real and imaginary parts. The imaginary part yields the transport of intensity equation (TIE), which can be used to derive the phase distribution at the observation plane. If a phase object is approximately imaged on the recording plane(s), TIE yields the phase without the need for phase unwrapping. In our experiment, the 3D image of a phase object and an amplitude object embedded in a phase object is recovered. The phase object is created by heating a liquid, comprising a solution of red dye in alcohol, using a focused 514 nm laser beam to the point where self-phase modulation of the beam is observed. The optical intensities are recorded at various planes during propagation of a low power 633 nm laser beam through the liquid. In the process of applying TIE to derive the phase at the observation plane, the real part of the complex equation is also examined as a cross-check of our calculations. For pure phase objects, it is shown that the real part of the complex equation is best satisfied around the image plane. Alternatively, it is proposed that this information can be used to determine the optimum image plane.
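
    As a worked illustration of the transport-of-intensity relation used above, the sketch below recovers the phase from an axial intensity derivative with the standard Fourier-space Poisson solution under a uniform-intensity approximation; the wavelength, sampling and uniform-intensity assumption are simplifications for illustration, not the authors' exact procedure.

    # Hedged sketch: FFT-based TIE phase retrieval under a uniform-intensity assumption.
    import numpy as np

    def tie_phase(dI_dz, I0, wavelength, dx):
        """Recover phase (radians) from the axial intensity derivative dI/dz.

        Solves  laplacian(phi) = -(k / I0) * dI/dz  with a Fourier Poisson solver.
        """
        k = 2 * np.pi / wavelength
        ny, nx = dI_dz.shape
        fx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi          # angular spatial frequencies
        fy = np.fft.fftfreq(ny, d=dx) * 2 * np.pi
        q2 = fx[None, :] ** 2 + fy[:, None] ** 2
        q2[0, 0] = np.inf                                  # suppress the undefined DC term
        rhs = -(k / I0) * dI_dz
        phi_hat = np.fft.fft2(rhs) / (-q2)
        return np.real(np.fft.ifft2(phi_hat))

    if __name__ == "__main__":
        # Synthetic check: build dI/dz consistent with a smooth Gaussian phase bump
        n, dx, wl, I0 = 256, 1e-6, 633e-9, 1.0
        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x)
        phi_true = 2.0 * np.exp(-(X**2 + Y**2) / (30e-6) ** 2)
        lap = np.gradient(np.gradient(phi_true, dx, axis=1), dx, axis=1) + \
              np.gradient(np.gradient(phi_true, dx, axis=0), dx, axis=0)
        dI_dz = -(I0 / (2 * np.pi / wl)) * lap
        phi_rec = tie_phase(dI_dz, I0, wl, dx)
        err = (phi_rec - phi_rec.mean()) - (phi_true - phi_true.mean())
        print("max abs error:", np.max(np.abs(err)))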

  13. The Representation of Cultural Heritage from Traditional Drawing to 3d Survey: the Case Study of Casamari's Abbey

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Saccone, M.

    2016-06-01

    In 3D survey, the aspects most discussed in the scientific community are those related to the acquisition of data from integrated survey (laser scanner, photogrammetric, topographic and traditional direct), rather than those relating to the interpretation of the data. Yet in traditional methods of representation, the interpretation of the data, such as that of the philological reconstruction, constitutes the most important aspect. It is therefore essential, in modern systems of survey and representation, to filter the information acquired. In the system based on integrated survey that we have adopted, the 3D object, characterized by a cloud of georeferenced points defined by their color values, forms the core of the elaboration. It allows targeted analyses to be carried out, using section planes as a tool for selecting and filtering data, comparable with those of traditional drawings. In the case study of the Abbey of Casamari (Veroli), one of the most important Cistercian settlements in Italy, the survey was made under an agreement between the Ministry of Cultural Heritage and Activities and Tourism (MiBACT) and the University of Roma Tre, within the project "Assessment of the seismic safety of the state museum". The reference 3D model, consisting of the superposition of georeferenced data from the various surveys, is the tool with which to develop representative models comparable to traditional ones. It provides the spatial environment necessary for drawing up plans and sections with a definition sufficient to develop thematic analyses related to the phases of construction, the state of deterioration and the structural features.

  14. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    ERIC Educational Resources Information Center

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of the Interactive 3D Graphics Learning Objects can be effective and efficient in terms of Performance, Time on task, and Learning Efficiency. The study explored two treatments, namely whole versus Part Presentations of the Interactive 3D Graphics Learning Objects,…

  15. Detection and 3D representation of pulmonary air bubbles in HRCT volumes

    NASA Astrophysics Data System (ADS)

    Silva, Jose S.; Silva, Augusto F.; Santos, Beatriz S.; Madeira, Joaquim

    2003-05-01

    Bubble emphysema is a disease characterized by the presence of air bubbles within the lungs. With the purpose of identifying pulmonary air bubbles, two alternative methods were developed, using High-Resolution Computed Tomography (HRCT) exams. The search volume is confined to the pulmonary volume through a previously developed pulmonary contour detection algorithm. The first detection method follows a slice-by-slice approach and uses selection criteria based on the Hounsfield levels, dimensions, shape and localization of the bubbles. Candidate regions that do not exhibit axial coherence along at least two sections are excluded. Intermediate sections are interpolated for a more realistic representation of lungs and bubbles. The second detection method, after the pulmonary volume delimitation, follows a fully 3D approach. A global threshold is applied to the entire lung volume, returning candidate regions. 3D morphological operators are used to remove spurious structures and to circumscribe the bubbles. Bubble representation is accomplished by two alternative methods. The first generates bubble surfaces based on the voxel volumes previously detected; the second method assumes that bubbles are approximately spherical and, in order to obtain better 3D representations, fits super-quadrics to the bubble volumes. The fitting process is based on a non-linear least-squares optimization method, in which a super-quadric is adapted to a regular grid of points defined on each bubble. All methods were applied to real and semi-synthetic data in which artificial, randomly deformed bubbles were embedded in the interior of healthy lungs. Quantitative results regarding bubble geometric features are either similar to the a priori known values used in the simulation tests, or indicate clinically acceptable dimensions and locations when dealing with real data.
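
    The fully 3D detection step described above (global threshold, 3D morphology, candidate regions) can be sketched with SciPy as below; the HU threshold, structuring element and minimum-size value are placeholders for illustration, not the paper's parameters.

    # Hedged sketch: global threshold + 3D morphology + labeling for air-bubble candidates.
    import numpy as np
    from scipy import ndimage

    def detect_bubbles(hu_volume, lung_mask, hu_threshold=-950, min_voxels=30):
        """Return a labeled 3D volume of candidate air bubbles inside the lung mask."""
        candidates = (hu_volume < hu_threshold) & lung_mask       # very low attenuation = air
        candidates = ndimage.binary_opening(candidates, structure=np.ones((3, 3, 3)))
        labels, n = ndimage.label(candidates)                     # 3D connected components
        sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
        for lab, size in enumerate(sizes, start=1):
            if size < min_voxels:                                  # drop spurious tiny regions
                labels[labels == lab] = 0
        return labels

    if __name__ == "__main__":
        vol = np.full((40, 40, 40), -700.0)                        # synthetic lung parenchyma
        vol[10:20, 10:20, 10:20] = -1000.0                         # synthetic air bubble
        mask = np.ones_like(vol, dtype=bool)
        labs = detect_bubbles(vol, mask)
        print("bubbles found:", len(np.unique(labs)) - 1)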

  16. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  17. An Overview of 3d Topology for Ladm-Based Objects

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. A.; Rahman, A. A.; van Oosterom, P.

    2015-10-01

    This paper reviews 3D topology within the Land Administration Domain Model (LADM) international standard. It is important to review the characteristics of the different 3D topological models and to choose the most suitable model for certain applications. The characteristics of the different 3D topological models are based on several main aspects (e.g. space or plane partition, primitives used, constructive rules, orientation, and explicit or implicit relationships). The most suitable 3D topological model depends on the type of application it is used for. There is no single 3D topology model best suited for all types of applications. Therefore, it is very important to define the requirements of the 3D topology model. The context of this paper is a 3D topology for LADM-based objects.

  18. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity. PMID:24991752

  19. Temporal-spatial modeling of fast-moving and deforming 3D objects

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoliang; Wei, Youzhi

    1998-09-01

    This paper gives a brief description of the method and techniques developed for the modeling and reconstruction of fast moving and deforming 3D objects. A new approach using close-range digital terrestrial photogrammetry in conjunction with high speed photography and videography is proposed. A sequential image matching method (SIM) has been developed to automatically process pairs of images taken continuously of any fast moving and deforming 3D objects. Using the SIM technique a temporal-spatial model (TSM) of any fast moving and deforming 3D objects can be developed. The TSM would include a series of reconstructed surface models of the fast moving and deforming 3D object in the form of 3D images. The TSM allows the 3D objects to be visualized and analyzed in sequence. The SIM method, specifically the left-right matching and forward-back matching techniques are presented in the paper. An example is given which deals with the monitoring of a typical blast rock bench in a major open pit mine in Australia. With the SIM approach and the TSM model it is possible to automatically and efficiently reconstruct the 3D images of the blasting process. This reconstruction would otherwise be impossible to achieve using a labor intensive manual processing approach based on 2D images taken from conventional high speed cameras. The case study demonstrates the potential of the SIM approach and the TSM for the automatic identification, tracking and reconstruction of any fast moving and deforming 3D targets.

  20. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose a depth-first search segmentation algorithm that traverses a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.
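
    As a rough sketch of the coarse-surface and connected-component steps described above, the code below uses scikit-image's marching cubes and SciPy's connected-components routine in place of the author's depth-first search; the iso-level and the synthetic volume are placeholders for illustration.

    # Hedged sketch: coarse surface via marching cubes, then mesh connected components.
    import numpy as np
    from skimage.measure import marching_cubes
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import connected_components

    def mesh_components(volume, level):
        verts, faces, normals, values = marching_cubes(volume, level=level)
        # Build a vertex adjacency graph from the triangle edges
        edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
        n = len(verts)
        adj = coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(n, n))
        n_comp, labels = connected_components(adj, directed=False)
        return verts, faces, labels, n_comp

    if __name__ == "__main__":
        vol = np.zeros((40, 40, 40))
        vol[5:15, 5:15, 5:15] = 1.0          # two separate blobs -> two surface components
        vol[25:35, 25:35, 25:35] = 1.0
        verts, faces, labels, n_comp = mesh_components(vol, level=0.5)
        print("surface components:", n_comp)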

  1. 3D shape measurements for non-diffusive objects using fringe projection techniques

    NASA Astrophysics Data System (ADS)

    Su, Wei-Hung; Tseng, Bae-Heng; Cheng, Nai-Jen

    2013-09-01

    A scanning approach using holographic techniques to perform the 3D shape measurement of a non-diffusive object is proposed. Even when the depth discontinuity on the inspected surface is large, the proposed method can retrieve the 3D shape precisely.

  2. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  3. GestAction3D: A Platform for Studying Displacements and Deformations of 3D Objects Using Hands

    NASA Astrophysics Data System (ADS)

    Lingrand, Diane; Renevier, Philippe; Pinna-Déry, Anne-Marie; Cremaschi, Xavier; Lion, Stevens; Rouel, Jean-Guilhem; Jeanne, David; Cuisinaud, Philippe; Soula, Julien

    We present a low-cost hand-based device coupled with a 3D motion recovery engine and 3D visualization. This platform aims at studying ergonomic 3D interactions in order to manipulate and deform 3D models by interacting with the hands on 3D meshes. Deformations are performed using different modes of interaction that we detail in the paper. Finger extremities are attached to vertices, edges or facets. Switching from one mode to another or changing the point of view is done using gestures. The determination of the most adequate gestures is part of this work

  4. Electro-holography display using computer generated hologram of 3D objects based on projection spectra

    NASA Astrophysics Data System (ADS)

    Huang, Sujuan; Wang, Duocheng; He, Chao

    2012-11-01

    A new method for synthesizing computer-generated holograms of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principles of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in double-circle and four-circle patterns to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform, using a conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with reference light from a laser source, the amplitude and phase information encoded in the CGH is reconstructed through diffraction of the light modulated by the LCD.

  5. Separating the Representation from the Science: Training Students in Comprehending 3D Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Silver, D.; Chiang, J.; Halpern, D.; Oh, K.; Tremaine, M.

    2011-12-01

    Studies of students taking first year geology and earth science courses at universities find that a remarkable number of them are confused by the three-dimensional representations used to explain the science [1]. Comprehension of these 3D representations has been found to be related to an individual's spatial ability [2]. A variety of interactive programs and animations have been created to help explain the diagrams to beginning students [3, 4]. This work has demonstrated comprehension improvement and removed a gender gap between male (high spatial) and female (low spatial) students [5]. However, not much research has examined what makes the 3D diagrams so hard to understand or attempted to build a theory for creating training designed to remove these difficulties. Our work has separated the science labeling and comprehension of the diagrams from the visualizations to examine how individuals mentally see the visualizations alone. In particular, we asked subjects to create a cross-sectional drawing of the internal structure of various 3D diagrams. We found that viewing planes (the coordinate system the designer applies to the diagram), cutting planes (the planes formed by the requested cross sections) and visual property planes (the planes formed by the prominent features of the diagram, e.g., a layer at an angle of 30 degrees to the top surface of the diagram) that deviated from a Cartesian coordinate system imposed by the viewer caused significant problems for subjects, in part because these deviations forced them to mentally re-orient their viewing perspective. Problems with deviations in all three types of plane were significantly harder than those deviating on one or two planes. Our results suggest training that does not focus on showing how the components of various 3D geologic formations are put together but rather training that guides students in re-orienting themselves to deviations that differ from their right-angle view of the world, e.g., by showing how

  6. Neuronal Representation of 3-D Space in the Primary Visual Cortex and Control of Eye Movements.

    PubMed

    Alekseenko, Svetlana V

    2015-01-01

    The aim of this article is to consider the correlations between the structure of the primary visual cortical area V1 and control of coordinated movements of the two eyes. Using the anatomical data available, a schematic map of 3-D space representation in the layer IV of area V1 containing only monocular cells has been constructed. The analysis of this map revealed that binocular neurons of V1, which are formed by convergence of monocular cells, should encode the absolute disparity. Participation of monocular and binocular neurons of V1 in the control of convergence, divergence, and version eye movements is discussed. It is proposed that synchronous contraction of corresponding extraocular muscles of both eyes for vergence might be ensured by duplicated transmission of information from the central part of retina to visual cortex of both hemispheres. PMID:26562914

  7. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    NASA Astrophysics Data System (ADS)

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

    In recent years, there has been a demand for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under 7 different illumination angles. The 5-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles and viewpoints are rendered using OpenGL with the 3D shape and the gonio-photometric properties.
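
    The dichromatic reflection model mentioned above separates body (diffuse) and interface (specular) reflection, and the Phong model gives the angular fall-off of the specular lobe. The toy function below combines the two for a single surface point; all coefficients are arbitrary illustration values, not parameters fitted by the authors.

    # Toy sketch of a dichromatic (diffuse + Phong specular) reflection model.
    import numpy as np

    def reflect(normal, light_dir, view_dir, body_color, kd=0.8, ks=0.3, shininess=20):
        """Return RGB radiance for one surface point (all direction vectors unit length)."""
        n, l, v = (np.asarray(x, float) for x in (normal, light_dir, view_dir))
        diffuse = kd * max(np.dot(n, l), 0.0) * np.asarray(body_color, float)  # body reflection
        r = 2.0 * np.dot(n, l) * n - l                                         # mirror direction
        specular = ks * max(np.dot(r, v), 0.0) ** shininess * np.ones(3)       # interface reflection (white)
        return diffuse + specular

    if __name__ == "__main__":
        print(reflect(normal=[0, 0, 1], light_dir=[0, 0.5**0.5, 0.5**0.5],
                      view_dir=[0, -0.5**0.5, 0.5**0.5], body_color=[0.7, 0.2, 0.1]))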

  8. Average Cross-Sectional Area of DebriSat Fragments Using Volumetrically Constructed 3D Representations

    NASA Technical Reports Server (NTRS)

    Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.

    2016-01-01

    Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area for a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the areas of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described, as well as preliminary results of an analysis to determine the "optimal" number of images needed for
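
    For a convex body, the one-fourth relation cited above (Cauchy's formula) can be checked numerically in the spirit of Approach A: average the silhouette areas of a convex hull over random viewing directions and compare against surface area divided by four. The sampling density below is arbitrary, and the sketch does not handle the concave fragments discussed in the paper.

    # Hedged sketch: average cross-sectional area of a convex hull vs. surface_area / 4.
    import numpy as np
    from scipy.spatial import ConvexHull

    def projected_area(hull, direction):
        """Silhouette area of a convex hull projected along a unit direction."""
        d = np.asarray(direction, float)
        d /= np.linalg.norm(d)
        total = 0.0
        pts = hull.points
        for tri in hull.simplices:                    # triangular facets of the hull
            a, b, c = pts[tri]
            cross = np.cross(b - a, c - a)            # facet normal scaled by twice its area
            total += abs(np.dot(cross, d)) / 4.0      # 0.5 * A_f * |n_f . d|, halved for front/back
        return total

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        hull = ConvexHull(rng.normal(size=(200, 3)))
        dirs = rng.normal(size=(2000, 3))
        avg = np.mean([projected_area(hull, d) for d in dirs])
        print("sampled average:", avg, " surface_area / 4:", hull.area / 4)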

  9. A generic algorithm for constructing hierarchical representations of geometric objects

    SciTech Connect

    Xavier, P.G.

    1995-10-01

    For a number of years, robotics researchers have exploited hierarchical representations of geometrical objects and scenes in motion planning, collision avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produce balanced, coherent hierarchies. Its worst-case running time is O(N² log N), but for non-pathological cases it is O(N log N), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives.
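
    The bottom-up clustering idea can be sketched generically with SciPy's agglomerative clustering over primitive centroids, attaching an axis-aligned bounding box to every internal node of the resulting tree; this is a hedged illustration of the general approach, not the heuristic described in the report.

    # Generic sketch of a bottom-up bounding-volume hierarchy over primitive centroids.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree

    def build_hierarchy(centroids):
        """Return (root, boxes): a cluster tree and an AABB for every node."""
        Z = linkage(centroids, method="ward")           # bottom-up pairwise merging
        root, nodes = to_tree(Z, rd=True)
        boxes = {}
        for node in nodes:
            leaf_ids = node.pre_order()                 # primitive indices under this node
            pts = centroids[leaf_ids]
            boxes[node.get_id()] = (pts.min(axis=0), pts.max(axis=0))
        return root, boxes

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        centroids = rng.uniform(0, 10, size=(50, 3))    # stand-ins for polygon centroids
        root, boxes = build_hierarchy(centroids)
        lo, hi = boxes[root.get_id()]
        print("root AABB spans:", hi - lo)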

  10. 3D models automatic reconstruction of selected close range objects. (Polish Title: Automatyczna rekonstrukcja modeli 3D małych obiektów bliskiego zasiegu)

    NASA Astrophysics Data System (ADS)

    Zaweiska, D.

    2013-12-01

    Reconstruction of three-dimensional, realistic models of objects from digital images has been a topic of research in many areas of science for many years. This development is stimulated by new technologies and tools which have appeared recently, such as digital photography, laser scanners, increases in equipment efficiency and the Internet. The objective of this paper is to present the results of automatic modeling of selected close-range objects, using digital photographs acquired with the Hasselblad H4D50 camera. The author's software tool was utilized for the calculations; it performs the successive stages of 3D model creation. The modeling process is presented as a complete process, which starts from the acquisition of images and ends with the creation of a photorealistic 3D model in the same software environment. Experiments were performed for selected close-range objects, with appropriately arranged image geometry forming a ring around the measured object. The Area Based Matching (CC/LSM) method and the RANSAC algorithm, with the use of tensor calculus, were utilized for automatic matching of points detected with the SUSAN algorithm. Surface reconstruction is one of the important stages of 3D model generation. Reconstruction of precise surfaces from an unorganized cloud of points, acquired from automatic processing of digital images, is a difficult task which has not yet been definitively solved. The creation of polygonal models that can meet high requirements concerning modeling and visualization is required in many applications. The polygonal method is usually the best way to represent measurement results precisely and, at the same time, to achieve an optimum description of the surface. Three algorithms were tested: the volumetric method (VCG), the Poisson method and the ball-pivoting method. These methods are mostly applied to the modeling of uniform grids of points. Results of the experiments proved that incorrect

  11. Viewpoint independent representation and recognition of polygonal faces in 3-D

    SciTech Connect

    Bunke, H.; Glauser, T.

    1993-08-01

    The recognition of polygons in 3-D space is an important task in robot vision. Two particular problems are addressed in this paper. First, a new set of local shape descriptors for polygons is proposed that is invariant under affine transformation. Furthermore, the descriptors are complete in the sense that they allow the reconstruction of any polygon in 3-D space from three consecutive vertices. The second problem discussed in this paper is the recognition of 2-D polygonal objects under affine transformation and in the presence of partial occlusion. A recognition procedure based on the matching of edge-length ratios is introduced, using a simplified version of the standard dynamic programming procedure commonly employed for string matching. The algorithm is conceptually very simple, easy to implement and has a low computational complexity. A set of experiments shows that the method is reliable and robust.

  12. Improving low-dose cardiac CT images using 3D sparse representation based processing

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in diagnoses of coronary artery diseases due to the continuously improving temporal and spatial resolution. When helical CT with a lower pitch scanning mode is used, the effective radiation dose can be significant when compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams including high pitch scans using dual source CT scanners and step-and-shot scanning mode for both single source and dual source CT scanners. Additionally, software methods have also been proposed to reduce noise in the reconstructed CT images and thus offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance of a certain imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, in this paper, a 3D dictionary representation based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.

  13. Computing 3-D structure of rigid objects using stereo and motion

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1987-01-01

    Work performed as a step toward an intelligent automatic machine vision system for 3-D imaging is discussed. The problem considered is the quantitative 3-D reconstruction of rigid objects. Motion and stereo are the two clues considered in this system. The system basically consists of three processes: the low level process to extract image features, the middle level process to establish the correspondence in the stereo (spatial) and motion (temporal) modalities, and the high level process to compute the 3-D coordinates of the corner points by integrating the spatial and temporal correspondences.
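
    The middle- and high-level steps above reduce to matching points across views and computing their 3-D coordinates; the sketch below shows the standard linear (DLT) triangulation of one point from two camera projection matrices, which is a common building block rather than the specific system described in the report. The camera matrices in the demo are assumptions for illustration.

    # Standard linear (DLT) triangulation of a 3-D point from two views.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points. Returns XYZ."""
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                      # de-homogenize

    if __name__ == "__main__":
        K = np.diag([800, 800, 1.0])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at the origin
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.2, 0, 0]]).T])  # 0.2 baseline along x
        Xw = np.array([0.1, -0.05, 2.0, 1.0])
        x1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
        x2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
        print(triangulate(P1, P2, x1, x2))       # should recover ~[0.1, -0.05, 2.0]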

  14. 3D photography in the objective analysis of volume augmentation including fat augmentation and dermal fillers.

    PubMed

    Meier, Jason D; Glasgold, Robert A; Glasgold, Mark J

    2011-11-01

    The authors present quantitative and objective 3D data from their studies showing long-term results with facial volume augmentation. The first study analyzes fat grafting of the midface and the second study presents augmentation of the tear trough with hyaluronic filler. Surgeons using 3D quantitative analysis can learn the duration of results and the optimal amount to inject, as well as showing patients results that are not demonstrable with standard, 2D photography. PMID:22004863

  15. Segmentation of 3D tubular objects with adaptive front propagation and minimal tree extraction for 3D medical imaging.

    PubMed

    Cohen, Laurent D; Deschamps, Thomas

    2007-08-01

    We present a new fast approach for segmentation of thin branching structures, like vascular trees, based on Fast-Marching (FM) and Level Set (LS) methods. FM allows segmentation of tubular structures by inflating a "long balloon" from a user given single point. However, when the tubular shape is rather long, the front propagation may blow up through the boundary of the desired shape close to the starting point. Our contribution is focused on a method to propagate only the useful part of the front while freezing the rest of it. We demonstrate its ability to segment quickly and accurately tubular and tree-like structures. We also develop a useful stopping criterion for the causal front propagation. We finally derive an efficient algorithm for extracting an underlying 1D skeleton of the branching objects, with minimal path techniques. Each branch being represented by its centerline, we automatically detect the bifurcations, leading to the "Minimal Tree" representation. This so-called "Minimal Tree" is very useful for visualization and quantification of the pathologies in our anatomical data sets. We illustrate our algorithms by applying it to several arteries datasets. PMID:17671862
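
    The Fast-Marching front propagation described above can be illustrated with the third-party scikit-fmm package: arrival times of a front seeded at a single point, with speed tied to image intensity, stay low inside a bright tubular region. The speed map and percentile threshold below are illustrative assumptions, and the sketch omits the paper's front-freezing, stopping criterion and minimal-tree extraction.

    # Hedged illustration of Fast-Marching front propagation with scikit-fmm.
    import numpy as np
    import skfmm

    def arrival_times(image, seed, eps=1e-3):
        """Travel time of a front from `seed`, moving fast where the image is bright."""
        phi = np.ones_like(image, dtype=float)
        phi[seed] = -1.0                               # zero level set around the seed point
        speed = image.astype(float) + eps              # avoid zero speed
        return skfmm.travel_time(phi, speed)

    if __name__ == "__main__":
        img = np.zeros((64, 64))
        img[30:34, :] = 1.0                            # bright horizontal "vessel"
        t = arrival_times(img, seed=(32, 2))
        vessel_mask = t < np.percentile(t, 5)          # earliest-reached pixels hug the vessel
        print("pixels reached early:", int(vessel_mask.sum()))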

  16. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality can be used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.

  17. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    NASA Astrophysics Data System (ADS)

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been a demand for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric information of the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with the 3D wire-frame image taken by a 3D digitizer are also presented.

  18. Computer generated holograms of 3D objects with reduced number of projections

    NASA Astrophysics Data System (ADS)

    Huang, Su-juan; Liu, Dao-jin; Zhao, Jing-jing

    2010-11-01

    A new method for synthesizing computer-generated holograms of 3D objects with a reduced number of projections is proposed. According to the principle of the paraboloid of revolution in 3D Fourier space, spectral information of 3D objects is gathered from projection images. We record a series of real projection images of 3D objects under incoherent white-light illumination using a circular scanning method, synthesize interpolated projection images by motion estimation and compensation between adjacent real projection images, and then extract the spectral information of the 3D objects from all projection images arranged in a circle. Because of quantization error, extracting the information in a two-circle pattern is better than in a single circle. Finally, the hologram is encoded based on computer-generated holography using a conjugate-symmetric extension. Our method significantly reduces the number of required real projections without much increase in the computing time of the hologram or degradation of the reconstructed image. Numerical reconstruction of the hologram shows good results.

  19. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core–shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  20. High-purity 3D nano-objects grown by focused-electron-beam induced deposition.

    PubMed

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core-shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices. PMID:27454835

  1. Automatic 360-deg profilometry of a 3D object using a shearing interferometer and virtual grating

    NASA Astrophysics Data System (ADS)

    Zhang, Yong-Lin; Bu, Guixue

    1996-10-01

    The phase-measuring technique has been widely used in optical precision inspection because of its extraordinary advantages. We use the phase-measuring technique to design a practical instrument for measuring the 360-degree profile of a 3D object. A novel method that can realize profile detection with higher speed and lower cost is proposed. A phase-unwrapping algorithm based on second-order differentiation is developed. A complete 3D shape is reconstructed from a series of line-section profiles corresponding to discrete angular positions of the object. The profile-joining procedure involves only two fixed parameters and a coordinate transformation.

  2. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821
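
    The two-stage idea above (a global stochastic search to stabilize the solution, then gradient-based refinement to reduce error) can be sketched for range-based 3D positioning with SciPy's dual annealing and L-BFGS-B; the residual model, bounds and reader layout below are placeholders for illustration, not the paper's algorithm.

    # Hedged sketch: simulated-annealing search + gradient refinement for 3D tag location.
    import numpy as np
    from scipy.optimize import dual_annealing, minimize

    def locate(anchors, ranges, bounds):
        """anchors: (n, 3) reader positions; ranges: measured distances to the tag."""
        def cost(p):
            return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

        coarse = dual_annealing(cost, bounds=bounds)                       # global search
        fine = minimize(cost, coarse.x, method="L-BFGS-B", bounds=bounds)  # local refinement
        return fine.x

    if __name__ == "__main__":
        anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 5]], float)
        true_tag = np.array([3.0, 4.0, 1.5])
        rng = np.random.default_rng(0)
        ranges = np.linalg.norm(anchors - true_tag, axis=1) + rng.normal(0, 0.05, 4)
        est = locate(anchors, ranges, bounds=[(0, 10)] * 3)
        print("estimated tag position:", est)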

  3. Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning

    NASA Astrophysics Data System (ADS)

    Serna, Andrés; Marcotegui, Beatriz

    2014-07-01

    We propose an automatic and robust approach to detect, segment and classify urban objects from 3D point clouds. Processing is carried out using elevation images and the result is reprojected onto the 3D point cloud. First, the ground is segmented and objects are detected as discontinuities on the ground. Then, connected objects are segmented using a watershed approach. Finally, objects are classified using SVM with geometrical and contextual features. Our methodology is evaluated on databases from Ohio (USA) and Paris (France). In the former, our method detects 98% of the objects, 78% of them are correctly segmented and 82% of the well-segmented objects are correctly classified. In the latter, our method leads to an improvement of about 15% on the classification step with respect to previous works. Quantitative results prove that our method not only provides a good performance but is also faster than other works reported in the literature.

  4. 3D shape shearography with integrated structured light projection for strain inspection of curved objects

    NASA Astrophysics Data System (ADS)

    Anisimov, Andrei G.; Groves, Roger M.

    2015-05-01

    Shearography (speckle pattern shearing interferometry) is a non-destructive testing technique that provides full-field surface strain characterization. Since real-life objects, especially in aerospace, transport or cultural heritage (e.g. aircraft leading edges or sculptures), are not flat, their inspection with shearography is of interest for both hidden-defect detection and material characterization. Accurate strain measurement of a highly curved or free-form surface needs to be performed by combining inline object shape measurement and processing of shearography data in 3D. Previous research has not provided a general solution. This research is devoted to the practical questions of developing a 3D shape shearography system for surface strain characterization of curved objects. The complete procedure for calibration and data processing of a 3D shape shearography system with an integrated structured light projector is presented. This includes an estimation of the actual shear distance and a sensitivity matrix correction within the system field of view. For the experimental part, a 3D shape shearography system prototype was developed. It employs three spatially distributed shearing cameras, with Michelson interferometers acting as the shearing devices, one illumination laser source and a structured light projector. The performance of the developed system was evaluated with a previously reported cylinder specimen (length 400 mm, external diameter 190 mm) loaded by internal pressure. Further steps for the development of the 3D shape shearography prototype and technique are also proposed.

  5. The 3D representation of the new transformation from the terrestrial to the celestial system.

    NASA Astrophysics Data System (ADS)

    Dehant, V.; de Viron, O.; Capitaine, N.

    2006-08-01

    To study the sky from the Earth or to use navigation satellites, we need two reference systems: a celestial reference system, as fixed as possible with respect to the inertial frame, and a terrestrial reference system, rotating with the Earth. Additionally, we need a way to go from one reference system to the other. This transformation involves the Earth rotation rate, polar motion, and precession-nutation. It is done using an intermediate system, in which the Earth's rotation itself is corrected for. Previously, an intermediate system related to the equinox was used; the new paradigm involves a point, denoted the Celestial Intermediate Origin (CIO), which, due to its kinematical property as a "non-rotating origin", allows a better description of the Earth's length of day. The use or not of the CIO only affects this intermediate frame. The new transformation involving the CIO is additionally much simpler. Moreover, the use of the CIO allows an elegant separation between polar motion, precession-nutation and rotation rate variations. In this presentation we show 3D representations that explain all this.

  6. Depth-based representations: Which coding format for 3D video broadcast applications?

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan

    2011-03-01

    3D Video (3DV) delivery standardization is currently ongoing in MPEG, and the time has come to choose a 3DV data representation format. What is at stake is the final quality for end-users, i.e. the visual quality of synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting the multiview data into color and depth occlusion layers. We compare final view quality using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to original views and to synthesized references (obtained without compression). Ultimately, LDV significantly outperforms MVD when state-of-the-art reference synthesis algorithms are used. Managing occlusions before encoding is advantageous compared with handling redundant signals at the decoder side. Besides, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level is reached. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.
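
    A minimal Python sketch of the kind of objective comparison reported above, scoring a synthesized view against a reference with PSNR and SSIM via scikit-image; the library calls are standard, but the paper's exact evaluation protocol is not reproduced here.

      import numpy as np
      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      def view_quality(reference, synthesized):
          # 8-bit RGB views as numpy arrays of identical shape
          psnr = peak_signal_noise_ratio(reference, synthesized, data_range=255)
          # channel_axis requires scikit-image >= 0.19 (older releases use multichannel=True)
          ssim = structural_similarity(reference, synthesized, channel_axis=-1, data_range=255)
          return psnr, ssim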

  7. Holographic display of real existing objects from their 3D Fourier spectrum

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Sando, Yusuke

    2005-02-01

    A method for synthesizing computer-generated holograms of real existing objects is described. A series of projection images is recorded both vertically and horizontally with an incoherent light source and a color CCD camera. Following the principle of computed tomography (CT), the 3-D Fourier spectrum is calculated from several projection images of the objects, and a Fresnel computer-generated hologram (CGH) is synthesized using a part of the 3-D Fourier spectrum. This method has the following advantages. First, blur-free reconstructed images are obtained in any direction owing to the two-dimensional scanning used during recording. Second, since simple projection images of the objects, rather than interference fringes, are recorded, a coherent light source is not necessary for recording. The use of a color CCD in recording enables us to record and reconstruct colorful objects. Finally, we demonstrate color reconstruction of objects both numerically and optically.

  8. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  9. Modeling and modification of medical 3D objects. The benefit of using a haptic modeling tool.

    PubMed

    Kling-Petersen, T; Rydmark, M

    2000-01-01

    any given amount of smoothing to the object. While the final objects need to be exported for further 3D graphic manipulation, FreeForm addresses one of the most time-consuming problems of 3D modeling: modification and creation of non-geometric 3D objects. PMID:10977532

  10. Fast error simulation of optical 3D measurements at translucent objects

    NASA Astrophysics Data System (ADS)

    Lutzke, P.; Kühmstedt, P.; Notni, G.

    2012-09-01

    The scan results of optical 3D measurements of translucent objects deviate from the real object's surface. This error is caused by the fact that light is scattered in the object's volume and is not exclusively reflected at its surface. A few approaches have been made to separate the surface-reflected light from the volume-scattered light. For smooth objects the surface-reflected light is concentrated predominantly in the specular direction and can only be observed from a point in that direction. Thus the separation either leads to measurement results that only provide data for near-specular directions or yields data from poorly separated areas. To ensure the flexibility and precision of optical 3D measurement systems for translucent materials it is necessary to enhance the understanding of the error-forming process. For this purpose a technique for simulating the 3D measurement of translucent objects is presented. A simple error model is briefly outlined and extended to an efficient simulation environment based upon ordinary raytracing methods. For comparison, the results of a Monte Carlo simulation are presented. Only a few material and object parameters are needed for the raytracing simulation approach. The attempt at in-system collection of these material- and object-specific parameters is illustrated. The main concept of developing an error-compensation method based on the simulation environment and the collected parameters is described. The complete procedure uses both the surface-reflected and the volume-scattered light for further processing.

  11. Identification and Detection of Simple 3D Objects with Severely Blurred Vision

    PubMed Central

    Kallie, Christopher S.; Legge, Gordon E.; Yu, Deyue

    2012-01-01

    Purpose. Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. Methods. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10–24 feet, or 3.05–7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2–6 feet, or 0.61–1.83 m), and color (gray and white). Results. Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). Conclusions. When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed. PMID:23111613

  12. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  13. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex, which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher-level abstract (meta-level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  14. Printing of metallic 3D micro-objects by laser induced forward transfer.

    PubMed

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop-on-demand transfer of molten, femtoliter metal droplets with high jetting directionality. Such small-volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high-aspect-ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed. PMID:26832524

  15. Close-Range Photogrammetric Tools for Small 3D Archeological Objects

    NASA Astrophysics Data System (ADS)

    Samaan, M.; Héno, R.; Pierrot-Deseilligny, M.

    2013-07-01

    This article focuses on the first experiments carried out for our PhD thesis, which is meant to make new image-based methods available to archeologists. As a matter of fact, efforts need to be made to find cheap, efficient and user-friendly procedures for image acquisition, data processing and quality control. Among the numerous tasks that archeologists have to face daily is the 3D recording of very small objects. The Apero/MicMac tools were used for the georeferencing and dense correlation procedures. Relatively standard workflows lead to depth maps, which can be represented either as 3D point clouds or as shaded relief images.

  16. Anticipatory Spatial Representation of 3D Regions Explored by Sighted Observers and a Deaf-and-Blind-Observer

    ERIC Educational Resources Information Center

    Intraub, Helene

    2004-01-01

    Viewers who study photographs of scenes tend to remember having seen beyond the boundaries of the view ["boundary extension"; J. Exp. Psychol. Learn. Mem. Cogn. 15 (1989) 179]. Is this a fundamental aspect of scene representation? Forty undergraduates explored bounded regions of six common (3D) scenes, visually or haptically (while blindfolded)…

  17. INSTRUMENTS AND METHODS OF INVESTIGATION: Computer generated three-dimensional representations of objects

    NASA Astrophysics Data System (ADS)

    Vedenov, A. A.

    1994-09-01

    Today scientists can create, with the aid of a personal computer, three-dimensional (3D) representations of objects—a specific database containing not only the spatial coordinates and colours of all points of an object, but also allowing it to be examined from a bird's-eye point of view. The database reveals the characteristic features of the object as a whole and allows them to be named. Examples of 3D representations are given and the principles of their creation and viewing are discussed.

  18. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
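
    The local entropy-based texture extraction mentioned above can be approximated slice by slice on a 3-D stack with scikit-image; the neighbourhood radius and the slice-wise treatment below are illustrative assumptions rather than the authors' exact procedure.

      import numpy as np
      from skimage.filters.rank import entropy
      from skimage.morphology import disk
      from skimage.util import img_as_ubyte

      def entropy_stack(volume, radius=5):
          # volume: non-negative 3-D array (z, y, x); returns a local-entropy "texture" stack
          out = np.empty(volume.shape, dtype=float)
          scale = volume.max()
          for k, sl in enumerate(volume):
              out[k] = entropy(img_as_ubyte(sl / scale), disk(radius))
          return out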

  19. Systems in Development: Motor Skill Acquisition Facilitates 3D Object Completion

    PubMed Central

    Soska, Kasey C.; Adolph, Karen E.; Johnson, Scott P.

    2009-01-01

    How do infants learn to perceive the backs of objects that they see only from a limited viewpoint? Infants’ 3D object completion abilities emerge in conjunction with developing motor skills—independent sitting and visual-manual exploration. Twenty-eight 4.5- to 7.5-month-old infants were habituated to a limited-view object and tested with volumetrically complete and incomplete (hollow) versions of the same object. Parents reported infants’ sitting experience, and infants’ visual-manual exploration of objects was observed in a structured play session. Infants’ self-sitting experience and visual-manual exploratory skills predicted looking to the novel, incomplete object on the habituation task. Further analyses revealed that self-sitting facilitated infants’ visual inspection of objects while they manipulated them. The results are framed within a developmental systems approach, wherein infants’ sitting skill, multimodal object exploration, and object knowledge are linked in developmental time. PMID:20053012

  20. The effect of background and illumination on color identification of real, 3D objects

    PubMed Central

    Allred, Sarah R.; Olkkonen, Maria

    2013-01-01

    For the surface reflectance of an object to be a useful cue to object identity, judgments of its color should remain stable across changes in the object's environment. In 2D scenes, there is general consensus that color judgments are much more stable across illumination changes than background changes. Here we investigate whether these findings generalize to real 3D objects. Observers made color matches to cubes as we independently varied both the illumination impinging on the cube and the 3D background of the cube. As in 2D scenes, we found relatively high but imperfect stability of color judgments under an illuminant shift. In contrast to 2D scenes, we found that background had little effect on average color judgments. In addition, variability of color judgments was increased by an illuminant shift and decreased by embedding the cube within a background. Taken together, these results suggest that in real 3D scenes with ample cues to object segregation, the addition of a background may improve stability of color identification. PMID:24273521

  1. A Low-Cost and Portable System for 3D Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    The optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. This system includes a rotating table designed and developed using a stepper motor and a very light rotation plate. The system also has eight laser light sources with very dense and strong beams which provide a relatively appropriate pattern on texture-less objects. In this system, images are taken semi-automatically by a camera in accordance with the steps of the stepper motor. The images can be used in Structure from Motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and using a GOM laser scanner. Then these objects were placed on the proposed turntable. Several convergent images were taken of each object while the laser light sources projected the pattern onto the objects. Afterwards, the images were imported into VisualSFM as a fully automatic software package for generating an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model from texture-less objects.

  2. Approximation of a foreign object using x-rays, reference photographs and 3D reconstruction techniques.

    PubMed

    Briggs, Matt; Shanmugam, Mohan

    2013-12-01

    This case study describes how a 3D animation was created to approximate the depth and angle of a foreign object (a metal bar) that had become embedded in a patient's head. A pre-operative CT scan was not available as the patient could not fit through the CT scanner; therefore a post-surgical CT scan, x-ray and photographic images were used. A surface render was made of the skull and imported into Blender (a 3D animation application). The metal bar was not available; however, images of a similar object that was retrieved from the scene by the ambulance crew were used to recreate a 3D model. The x-ray images were then imported into Blender and used as background images in order to align the skull reconstruction and metal bar at the correct depth/angle. A 3D animation was then created to fully illustrate the angle and depth of the iron bar in the skull. PMID:24206011

  3. Hybrid system of optics and computer for 3-D object recognition

    NASA Astrophysics Data System (ADS)

    Li, Qun Z.; Miao, Peng C.; He, Anzhi

    1992-03-01

    In this paper, a hybrid system of optics and computer for 3D object recognition is presented. The system consists of a Twyman-Green interferometer, a He-Ne laser, a computer, a TV camera, and an image processor. The structured light produced by the Twyman-Green interferometer is split and illuminates the object from two directions at the same time, so that a moiré contour is formed on the surface of the object. In order to remove unwanted patterns, we do not use the moiré contour formed directly on the object's surface. Instead, we place a TV camera in the middle of the angle between the two illumination directions and capture two groups of deformed fringes on the object's surface. The two groups of deformed fringes are processed in the digital image processing system using XOR logic in the computer, and the moiré fringes are then extracted from the complicated background. The 3D coordinates of object points are obtained by following each moiré fringe, with points belonging to the same fringe assigned the same height. The object is described by its projected drawings in three coordinate planes. The projected drawings in three coordinate planes of known objects are stored in a library of judgment. The object can then be recognized by querying the library of judgment.
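
    A schematic Python sketch of the XOR step described above: the two deformed-fringe images are binarized and combined with a logical XOR, leaving the coarse beat (moiré) pattern. The thresholding rule is an arbitrary choice for illustration.

      import numpy as np

      def moire_xor(fringes_a, fringes_b):
          # binarize each deformed-fringe image about its mean intensity
          bin_a = fringes_a > fringes_a.mean()
          bin_b = fringes_b > fringes_b.mean()
          # XOR keeps only the regions where the two fringe systems disagree
          return np.logical_xor(bin_a, bin_b)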

  4. Intraclass retrieval of nonrigid 3D objects: application to face recognition.

    PubMed

    Passalis, Georgios; Kakadiaris, Ioannis A; Theoharis, Theoharis

    2007-02-01

    As the size of the available collections of 3D objects grows, database transactions become essential for their management with the key operation being retrieval (query). Large collections are also precategorized into classes so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intraclass retrieval tasks. We advocate that such intraclass problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a parameterized annotated model of the shape of the class objects, incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for nonrigid object classes. We apply our method to the face recognition domain, one of the most challenging intraclass retrieval tasks. We used the Face Recognition Grand Challenge v2 database, yielding an average verification rate of 95.2 percent at a 10^-3 false accept rate. The latest results of our work can be found at http://www.cbl.uh.edu/UR8D/. PMID:17170476

  5. Registration of untypical 3D objects in Polish cadastre - do we need 3D cadastre? / Rejestracja nietypowych obiektów 3D w polskim katastrze - czy istnieje potrzeba wdrożenia katastru 3D?

    NASA Astrophysics Data System (ADS)

    Marcin, Karabin

    2012-11-01

    The Polish cadastral system consists of two registers: the cadastre and the land register. The cadastre registers data on cadastral objects (land, buildings and premises) in a particular location (in a two-dimensional coordinate system) and their attributes, as well as data about the owners. The land register contains data concerning ownership and other rights to the property. Registration of a land parcel without spatial objects located on the surface is not problematic. Registration of buildings and premises in typical cases is not a problem either. The situation becomes more complicated in cases of multiple use of the space above or below the parcel and with more complex construction of the buildings. The paper presents rules concerning the registration of various untypical 3D objects located within the city of Warsaw. An analysis of the data concerning those objects registered in the cadastre and land register is presented in the paper, continuing the author's detailed research. The aim of this paper is to answer the question of whether we really need a 3D cadastre in Poland.

  6. An effective 3D leapfrog scheme for electromagnetic modelling of arbitrary shaped dielectric objects using unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; El Hachemi, M.; Belouettar, S.; Hassan, O.; Morgan, K.

    2015-12-01

    In computational electromagnetics, the advantages of the standard Yee algorithm are its simplicity and its low computational costs. However, because of the accuracy losses resulting from the staircased representation of curved interfaces, it is normally not the method of choice for modelling electromagnetic interactions with objects of arbitrary shape. For these problems, an unstructured mesh finite volume time domain method is often employed, although the scheme does not satisfy the divergence free condition at the discrete level. In this paper, we generalize the standard Yee algorithm for use on unstructured meshes and solve the problem concerning the loss of accuracy linked to staircasing, while preserving the divergence free nature of the algorithm. The scheme is implemented on high quality primal Delaunay and dual Voronoi meshes. The performance of the approach was validated in previous work by simulating the scattering of electromagnetic waves by spherical 3D PEC objects in free space. In this paper we demonstrate the performance of this scheme for penetration problems in lossy dielectrics using a new averaging technique for Delaunay and Voronoi edges at the interface. A detailed explanation of the implementation of the method, and a demonstration of the quality of the results obtained for transmittance and scattering simulations by 3D objects of arbitrary shapes, are presented.
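
    For context, the standard structured-grid Yee/leapfrog update that the paper generalizes can be sketched in one spatial dimension with normalized units (eps = mu = c = 1); grid size, time step and source below are arbitrary illustrative choices, not parameters from the paper.

      import numpy as np

      nx, nt = 400, 800
      dx = 1.0
      dt = 0.5 * dx            # Courant-stable time step
      Ez = np.zeros(nx)        # E on integer grid points
      Hy = np.zeros(nx - 1)    # H on the staggered half-grid

      for n in range(nt):
          Hy += dt / dx * (Ez[1:] - Ez[:-1])            # half-step update of H
          Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])      # half-step-later update of E
          Ez[nx // 4] += np.exp(-((n - 40) / 12.0) ** 2)  # soft Gaussian source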

  7. 220 GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging 'Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm³ volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.
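
    For orientation, the quoted ~1 cm³ voxel is consistent with the textbook range-resolution relation for a chirped radar; the figure below simply plugs in the 30 GHz bandwidth stated above and is not a number taken from the paper:

      \Delta R = \frac{c}{2B} = \frac{3 \times 10^{8}\ \mathrm{m/s}}{2 \times 30 \times 10^{9}\ \mathrm{Hz}} = 5\ \mathrm{mm}

    The cross-range resolution is set separately by the lens aperture and the standoff geometry.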

  8. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  9. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  10. Orienting Attention to Sound Object Representations Attenuates Change Deafness

    ERIC Educational Resources Information Center

    Backer, Kristina C.; Alain, Claude

    2012-01-01

    According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet…

  11. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3D Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. Documenting and digitizing these unique heritages over their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study in digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found, firstly, that cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp could, at this stage, only be used to model basic visual representations, and is ineffective in documenting the additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  12. Comparison of 3D representations depicting micro folds: overlapping imagery vs. time-of-flight laser scanner

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristidis D.; Georgopoulos, Andreas; Lozios, Stylianos G.

    2012-10-01

    A relatively new field of interest, which continuously gains ground nowadays, is digital 3D modeling. However, the methodologies, the accuracy and the time and effort required to produce a high-quality 3D model have changed drastically in the last few years. Whereas in the early days of digital 3D modeling, 3D models were only accessible to computer experts in animation working many hours in expensive, sophisticated software, today 3D modeling has become reasonably fast and convenient. On top of that, with online 3D modeling software such as 123D Catch, nearly everyone can produce 3D models with minimum effort and at no cost. The only requirement is panoramic overlapping images of the (still) objects the user wishes to model. This approach, however, has limitations in the accuracy of the model. An objective of the study is to examine these limitations by assessing the accuracy of this 3D modeling methodology against a Terrestrial Laser Scanner (TLS). Therefore, the scope of this study is to present and compare 3D models produced with two different methods: 1) the traditional TLS method with the Leica ScanStation 2 instrument and 2) panoramic overlapping images obtained with a DSLR camera and processed with the free 123D Catch software. The main objective of the study is to evaluate the advantages and disadvantages of the two 3D model production methodologies. The area represented by the 3D models features multi-scale folding in a cipollino marble formation. The most interesting part, and the most challenging to capture accurately, is an outcrop which includes vertically oriented micro folds. These micro folds have dimensions of a few centimeters while a relatively strong relief is evident between them (perhaps due to different material composition). The area of interest is located on Mt. Hymittos, Greece.

  13. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
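
    The ART baseline mentioned above is essentially a Kaczmarz-style row-action update on the linear system A x = b (projection matrix times voxel image equals measured projections). A minimal Python sketch of that update follows; the relaxation factor and iteration count are arbitrary choices and this is not the simulator's C++ implementation.

      import numpy as np

      def art(A, b, n_iter=10, lam=0.5):
          # x starts at zero; each sweep projects x toward the hyperplane of every ray equation
          x = np.zeros(A.shape[1])
          row_norms = (A ** 2).sum(axis=1)
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  if row_norms[i] > 0:
                      x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
          return x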

  14. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  15. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
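
    A crude Python stand-in for the height-histogram step described above (the Gibbs-Markov refinement is omitted): estimate the dominant low-height bin and label points near it as ground. Bin size and tolerance band are illustrative values, not the paper's settings.

      import numpy as np

      def ground_mask(points, bin_size=0.1, band=0.3):
          # points: (N, 3) array of x, y, z in metres; returns a boolean ground mask
          z = points[:, 2]
          hist, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
          ground_z = edges[np.argmax(hist)]      # most populated height bin
          return np.abs(z - ground_z) < band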

  16. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  17. A Global Hypothesis Verification Framework for 3D Object Recognition in Clutter.

    PubMed

    Aldoma, Aitor; Tombari, Federico; Stefano, Luigi Di; Vincze, Markus

    2016-07-01

    Pipelines to recognize 3D objects despite clutter and occlusions usually end up with a final verification stage whereby recognition hypotheses are validated or dismissed based on how well they explain sensor measurements. Unlike previous work, we propose a Global Hypothesis Verification (GHV) approach which regards all hypotheses jointly so as to account for mutual interactions. GHV provides a principled framework to tackle the complexity of our visual world by leveraging a plurality of recognition paradigms and cues. Accordingly, we present a 3D object recognition pipeline deploying both global and local 3D features as well as shape and color. Thereby, and facilitated by the robustness of the verification process, diverse object hypotheses can be gathered and weak hypotheses need not be suppressed too early to trade sensitivity for specificity. Experiments demonstrate the effectiveness of our proposal, which significantly improves over the state of the art and attains ideal performance (no false negatives, no false positives) on three out of the six most relevant and challenging benchmark datasets. PMID:26485476

  18. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  19. Applying Mean-Shift - Clustering for 3D object detection in remote sensing data

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Diederich, Malte; Troemel, Silke

    2013-04-01

    The timely warning and forecasting of high-impact weather events is crucial for life, safety and the economy. Therefore, the development and improvement of methods for the detection and nowcasting / short-term forecasting of these events is an ongoing research question. A new 3D object detection and tracking algorithm is presented. Within the project "object-based analysis and seamless prediction (OASE)" we address a better understanding and forecasting of convective events based on the synergetic use of remotely sensed data and new methods for detection, nowcasting, validation and assimilation. In order to gain advanced insight into the lifecycle of convective cells, we perform object detection on a new high-resolution 3D radar- and satellite-based composite and plan to track the detected objects over time, providing us with a model of the lifecycle. The insights into the lifecycle will be used to improve prediction of convective events on the nowcasting time scale, as well as to provide a new type of data to be assimilated into numerical weather models, thus seamlessly bridging the gap between nowcasting and NWP. The object identification (or clustering) is performed using a technique borrowed from computer vision, called mean-shift clustering. Mean-shift clustering works without many of the parameterizations or rigid threshold schemes employed by many existing schemes (e.g. KONRAD, TITAN, Trace-3D), which limit the tracking to fully matured convective cells of significant size and/or strength. Mean shift performs without such limiting definitions, providing a wider scope for studying larger classes of phenomena and providing a vehicle for research into the object definition itself. Since the mean-shift clustering technique could be applied to many types of remote-sensing and model data for object detection, it is of general interest to the remote sensing and modeling community. The focus of the presentation is the introduction of this technique and the results of its
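
    A minimal Python sketch of mean-shift clustering applied to candidate points from such a composite (e.g. the x, y, z positions of voxels above some reflectivity threshold), using scikit-learn; the thresholding and bandwidth choices are illustrative assumptions, not the project's settings.

      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth

      def detect_cells(points):
          # points: (N, 3) array of candidate voxel coordinates
          bw = estimate_bandwidth(points, quantile=0.1)          # data-driven kernel bandwidth
          ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(points)
          return ms.labels_, ms.cluster_centers_                 # one cluster per detected object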

  20. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  1. Determining canonical views of 3D object using minimum description length criterion and compressive sensing method

    NASA Astrophysics Data System (ADS)

    Chen, Ping-Feng; Krim, Hamid

    2008-02-01

    In this paper, we propose two methods to determine the canonical views of 3D objects: the minimum description length (MDL) criterion and a compressive sensing method. The MDL criterion searches for the description length that achieves a balance between model accuracy and parsimony. It takes the form of the sum of a likelihood and a penalizing term, where the likelihood favors model accuracy, such that more views assist the description of an object, while the second term penalizes lengthy descriptions to prevent overfitting of the model. In order to devise the likelihood term, we propose a model that represents a 3D object as the weighted sum of multiple range images, which is also used in the second method to determine the canonical views. In the compressive sensing method, an intelligent way of parsimoniously sampling an object is presented. We make direct inference from the work of Donoho [1] and Candès [2] and adapt it to our model. Each range image is viewed as a projection, or a sample, of a 3D model, and by using compressive sensing theory we are able to reconstruct the object with overwhelming probability by sparsely sensing the object in a random manner. Compressive sensing differs from traditional compression methods in that the former compresses in the sampling stage, whereas the latter collects a large number of samples and carries out the compression afterwards. The compressive sensing scheme is particularly useful when the number of sensors is limited or the sampling machinery costs significant resources or time.
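
    For reference, a common two-part form of the MDL score — given here only as a generic illustration of the likelihood-plus-penalty structure described above, not as the authors' exact criterion — is

      L(M, D) = -\log P(D \mid M) + \frac{k}{2}\log n

    where k is the number of retained views (model parameters) and n the number of data points; the first term rewards fit while the second penalizes description length.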

  2. 3D object optonumerical acquisition methods for CAD/CAM and computer graphics systems

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata; Pawlowski, Michal E.; Woznicki, Jerzy M.

    1999-08-01

    The creation of a virtual object for CAD/CAM and computer graphics on the basis of data gathered by full-field optical measurement of a 3D object is presented. The experimental coordinates are obtained either by a combined fringe projection/photogrammetry based system or by a fringe projection/virtual markers setup. A new and fully automatic procedure, which processes the cloud of measured points into a triangular mesh accepted by CAD/CAM and computer graphics systems, is presented. Its applicability to various classes of objects is tested, including an error analysis of the virtual objects generated. The usefulness of the method is proved by applying the virtual object in a rapid prototyping system and in a computer graphics environment.

  3. Flexible simulation strategy for modeling 3D cultural objects based on multisource remotely sensed imagery

    NASA Astrophysics Data System (ADS)

    Guienko, Guennadi; Levin, Eugene

    2003-01-01

    New ideas and solutions never come alone. Although automated feature extraction is not sufficiently mature to move from the realm of scientific investigation into the category of production technology, a new goal has arisen: 3D simulation of real-world objects extracted from images. This task, which evolved from feature extraction and is not an easy task itself, becomes even more complex, multi-leveled, and often uncertain and fuzzy when one exploits time-sequenced, multi-source remotely sensed visual data. The basic components of the process are familiar image processing tasks: fusion of various types of imagery, automatic recognition of objects, removing those objects from the source images, and replacing them in the images with realistic simulated "twin" object renderings. This paper discusses how to aggregate the most appropriate approach to each task into one technological process in order to develop a Manipulator for Visual Simulation of 3D objects (ManVIS) that is independent of imagery, format and media. The technology could be made general by combining a number of competent special-purpose algorithms under appropriate contextual, geometric, spatial, and temporal constraints derived from a-priori knowledge. This could be achieved by planning the simulation in an Open Structure Simulation Strategy Manager (O3SM), a distinct component of ManVIS that builds the simulation strategy before actual image manipulation begins.

  4. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, where the simultaneous delineation of multiple objects can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT stands behind efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then applies diffusion filtering to smooth the boundaries between objects (and background) and, finally, corrects possible disconnections of objects from their seeds. We evaluate DRIFT on 3D CT images of the thorax for segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system. PMID:25333179

  5. Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4

    NASA Astrophysics Data System (ADS)

    Gasore, J.; Prinn, R. G.

    2012-12-01

    The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al. 1997; Cohen & Prinn 2011). However, its application has been limited mainly to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost as the dimension of the uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has mainly been achieved by grouping the model grids into a few regions based on prior knowledge and expectations; urban versus rural, for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution is assumed with a priori fixed scale and location parameters, in order to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of black carbon, organic carbon and sulfate. We have carried out Monte Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a

  6. X-ray stereo imaging for micro 3D motions within non-transparent objects

    NASA Astrophysics Data System (ADS)

    Salih, Wasil H. M.; Buytaert, Jan A. N.; Dirckx, Joris J. J.

    2012-03-01

    We propose a new technique to measure the 3D motion of marker points along a straight path within an object using x-ray stereo projections. From recordings of two x-ray projections with a 90° separation angle, the 3D coordinates of marker points can be determined. By synchronizing the x-ray exposure time to the motion event, a moving marker leaves a trace in the image whose gray scale is linearly proportional to the marker velocity. From the gray scale along the motion path, the 3D motion (velocity) is obtained. The path of motion was reconstructed and compared with the applied waveform. The results showed that the accuracy is on the order of 5%. The difference in displacement amplitude between the new method and laser vibrometry was less than 5 μm. We demonstrated the method on the malleus ossicle motion in the gerbil middle ear as a function of pressure applied on the eardrum. The new method has the advantage over existing methods such as laser vibrometry that the structures under study do not need to be visually exposed. Due to the short measurement time and the high resolution, the method can be useful in the field of biomechanics for a variety of applications.
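
    The geometry of the two-projection reconstruction can be sketched as follows, assuming ideal parallel-beam projections onto two detectors at 0° and 90° that share the vertical axis, so one view supplies x and z and the other y and z; the magnification and alignment calibration of a real setup are omitted.

        # Marker 3D position from two orthogonal parallel-beam x-ray projections
        # (idealized geometry; no magnification or detector misalignment).
        import numpy as np

        def marker_3d(uv_view0, uv_view90, pixel_size=1.0):
            """uv_view0  = (u, v) in the 0-degree image:  u ~ x, v ~ z
               uv_view90 = (u, v) in the 90-degree image: u ~ y, v ~ z"""
            x = uv_view0[0] * pixel_size
            y = uv_view90[0] * pixel_size
            z = 0.5 * (uv_view0[1] + uv_view90[1]) * pixel_size  # shared axis, averaged
            return np.array([x, y, z])

        print(marker_3d((120.0, 45.2), (80.5, 44.8), pixel_size=0.05))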

  7. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, as well as in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter requires thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction of video images decreases the processing time and helps create a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to determine the final predicted accuracy and the model level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
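
    One simple way to thin a video sequence before structure-from-motion, in the spirit of the frame-reduction step described above, is to keep only every Nth frame whose sharpness exceeds a threshold; the sketch below uses OpenCV's variance-of-Laplacian as the blur score, with an illustrative stride and threshold rather than the authors' coverage-based criterion, and a hypothetical file name.

        # Select a reduced set of sharp frames from a video for image-based modelling.
        import cv2

        def select_frames(video_path, stride=15, blur_threshold=100.0):
            cap = cv2.VideoCapture(video_path)
            selected, index = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if index % stride == 0:
                    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # blur score
                    if sharpness > blur_threshold:
                        selected.append(frame)
                index += 1
            cap.release()
            return selected

        frames = select_frames("heritage_sequence.mp4")   # hypothetical file name
        print(len(frames), "frames kept for SFM and dense matching")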

  8. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    NASA Astrophysics Data System (ADS)

    Zhao, Xianling; Liu, Jiansheng; Zhang, Huayu; Wu, Yingchun

    2015-12-01

    The visible light radiated by some high temperature objects (less than 1200 °C) lies almost entirely in the red and infrared bands. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in the present work. Moreover, a method for filtering deformed pattern images is presented for correction of the unwrapping phase. Blue sinusoidal phase-shifting fringe pattern images are projected on the surface by a digital light processing (DLP) projector, and then the deformed patterns are captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images, filtered by a low-pass filter, are used to calculate the fringe order. Consequently, the 3D shape of a high temperature object is obtained from the unwrapping phase and the calibration parameter matrices of the DLP projector and 3-CCD camera. The experimental results show that the unwrapping phase is completely corrected with the filtering method by removing the high frequency noise from the first harmonic of the B color images. The measurement system can complete the measurement in a few seconds with a relative error of less than 1:1000.
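
    The phase-computation step can be sketched as follows, assuming a standard four-step phase-shifting scheme on the blue channel, with a Gaussian low-pass filter standing in for the paper's filtering of the deformed pattern images and scikit-image performing the unwrapping; conversion of the unwrapped phase to height with the calibrated projector/camera matrices is not shown.

        # Wrapped/unwrapped phase from four blue-channel phase-shifted captures
        # (pi/2 steps); Gaussian smoothing is an illustrative stand-in filter.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.restoration import unwrap_phase

        def phase_from_blue(images_rgb, sigma=2.0):
            """images_rgb: four HxWx3 captures (RGB order assumed) with pi/2 shifts."""
            blue = [gaussian_filter(im[..., 2].astype(float), sigma) for im in images_rgb]
            i0, i1, i2, i3 = blue
            wrapped = np.arctan2(i3 - i1, i0 - i2)   # wrapped phase in (-pi, pi]
            return unwrap_phase(wrapped)              # continuous phase map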

  9. Calibration target reconstruction for 3-D vision inspection system of large-scale engineering objects

    NASA Astrophysics Data System (ADS)

    Yin, Yongkai; Peng, Xiang; Guan, Yingjian; Liu, Xiaoli; Li, Ameng

    2010-11-01

    It is usually difficult to calibrate a 3-D vision inspection system employed to measure large-scale engineering objects. One of the challenges is how to build up a large and precise calibration target in situ. In this paper, we present a calibration target reconstruction strategy to solve this problem. First, we choose one of the engineering objects to be inspected as a calibration target and paste coded marks on its surface. Next, we locate and decode the marks to obtain homologous points. From multiple camera images, the fundamental matrix between adjacent images can be estimated; the essential matrix is then derived using the a priori known camera intrinsic parameters and decomposed to obtain the camera extrinsic parameters. Finally, we obtain initial 3D coordinates by binocular stereo reconstruction and optimize them with bundle adjustment, taking lens distortions into account, leading to a high-precision calibration target. This reconstruction strategy has been applied to the inspection of an industrial project, on which the proposed method is successfully validated.
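
    A compact sketch of the two-view part of this chain with OpenCV, assuming matched homologous points from the decoded marks and known intrinsics K; the RANSAC method choice is illustrative, and the final bundle adjustment with lens-distortion modelling is omitted.

        # Homologous points in two views -> fundamental matrix -> essential matrix
        # -> relative pose -> triangulated initial 3D coordinates.
        import numpy as np
        import cv2

        def initial_reconstruction(pts1, pts2, K):
            """pts1, pts2: Nx2 float arrays of matched mark centres; K: 3x3 intrinsics."""
            F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
            E = K.T @ F @ K                                  # essential from fundamental
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # extrinsics of second view
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, t])
            X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
            return (X_h[:3] / X_h[3]).T                      # Nx3 initial coordinates

    The initial coordinates obtained this way would then be refined by bundle adjustment that also models the lens distortions, as described in the abstract.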

  10. 3D Object Recognition using Gabor Feature Extraction and PCA-FLD Projections of Holographically Sensed Data

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon; Javidi, Bahram

    In this research, a 3D object classification technique using a single hologram is presented. A PCA-FLD classifier with feature vectors based on Gabor wavelets is utilized for this purpose. Training and test data for the 3D objects were obtained by computational holographic imaging. We were able to classify the 3D objects used in the experiments from only a few reconstructed planes of the hologram. The Gabor approach appears to be a good feature extractor for hologram-based 3D classification. The FLD combined with PCA proved to be a very efficient classifier even with few training samples. Substantial dimensionality reduction was achieved by using the proposed technique for the 3D classification problem with holographic imaging. As a consequence, we were able to classify different classes of 3D objects using computer-reconstructed holographic images.
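
    The PCA-FLD stage is straightforward to reproduce with scikit-learn once Gabor-magnitude feature vectors have been computed from the reconstructed hologram planes; the feature matrix, class labels and component counts below are placeholders.

        # PCA followed by Fisher linear discriminant (FLD/LDA) classification of
        # Gabor feature vectors extracted from hologram reconstruction planes.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        X = np.random.rand(60, 512)          # placeholder Gabor feature matrix
        y = np.repeat(np.arange(3), 20)      # three object classes

        clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
        clf.fit(X, y)
        print(clf.predict(X[:5]))            # predicted classes for new planes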

  11. Cosine series representation of 3D curves and its application to white matter fiber bundles in diffusion tensor imaging

    PubMed Central

    Adluru, Nagesh; Lee, Jee Eun; Lazar, Mariana; Lainhart, Janet E.; Alexander, Andrew L.

    2011-01-01

    We present a novel cosine series representation for encoding fiber bundles consisting of multiple 3D curves. The coordinates of the curves are parameterized as coefficients of a cosine series expansion. We address the issues of registration, averaging and statistical inference on curves in a unified Hilbert space framework. Unlike traditional splines, the proposed method does not have internal knots and explicitly represents curves as a linear combination of cosine basis functions. This simplicity in the representation enables us to design statistical models, register curves and perform subsequent analysis in a more unified statistical framework than splines. The proposed representation is applied to characterizing the abnormal shape of white matter fiber tracts passing through the splenium of the corpus callosum in autistic subjects. For an arbitrary tract, a 19-degree expansion is usually sufficient to reconstruct the tract with 60 parameters. PMID:23316267
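
    A minimal sketch of this representation, assuming the tract is sampled at parameters t in [0, 1] and each coordinate is expanded in the basis cos(k*pi*t) for k = 0..19, giving 20 coefficients per coordinate (60 parameters in total); the coefficients are fitted by ordinary least squares, and the placeholder tract stands in for real fiber data.

        # Least-squares cosine series representation of a 3D curve (fiber tract).
        import numpy as np

        def cosine_basis(t, degree=19):
            k = np.arange(degree + 1)
            return np.cos(np.pi * np.outer(t, k))            # (n_points, degree + 1)

        def fit_curve(points, degree=19):
            """points: (n, 3) ordered samples along the tract."""
            t = np.linspace(0.0, 1.0, len(points))
            coeffs, *_ = np.linalg.lstsq(cosine_basis(t, degree), points, rcond=None)
            return coeffs                                     # (degree + 1, 3) -> 60 values

        def reconstruct(coeffs, n=200):
            t = np.linspace(0.0, 1.0, n)
            return cosine_basis(t, coeffs.shape[0] - 1) @ coeffs

        tract = np.cumsum(np.random.randn(150, 3) * 0.1, axis=0)   # placeholder tract
        print(reconstruct(fit_curve(tract)).shape)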

  12. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Leister, Norbert

    2013-02-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel-transform-based or point-source-based ray-tracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering the effect of inherent SLM parameters, such as modulation type and bit depth, on reconstruction performance measures such as diffraction efficiency and SNR. We review three implementation schemes: Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance, we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of the different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.

  13. Recovery of 3D volume from 2-tone images of novel objects.

    PubMed

    Moore, C; Cavanagh, P

    1998-07-01

    In 2-tone images (e.g., Dallenbach's cow), only two levels of brightness are used to convey image structure: dark object regions and shadows are turned black and light regions are turned white. Despite a lack of shading, hue and texture information, many 2-tone images of familiar objects and scenes are accurately interpreted, even by naive observers. Objects frequently appear fully volumetric and are distinct from their shadows. If perceptual interpretation of 2-tone images is accomplished via bottom-up processes on the basis of geometrical structure projected to the image (e.g., volumetric parts, contour and junction information), novel objects should appear volumetric as readily as their familiar counterparts. We demonstrate that accurate volumetric representations are rarely extracted from 2-tone images of novel objects, even when these objects are constructed from volumetric primitives such as generalized cones (Marr, D., Nishihara, H.K., 1978. Proceedings of the Royal Society London 200, 269-294; Biederman, I. 1985. Computer Vision, Graphics, and Image Processing 32, 29-73), or from the rearranged components of a familiar object which is itself recognizable as a 2-tone image. Even familiar volumes such as canonical bricks and cylinders require scenes with redundant structure (e.g., rows of cylinders) or explicit lighting (a lamp in the image) for recovery of global volumetric shape. We conclude that 2-tone image perception is not mediated by bottom-up extraction of geometrical features such as junctions or volumetric parts, but may rely on previously stored representations in memory and a model of the illumination of the scene. The success of this top-down strategy implies that it is available for general object recognition in natural scenes. PMID:9735536

  14. 3D Imaging with a Single-Aperture 3-mm Objective Lens: Concept, Fabrication and Test

    NASA Technical Reports Server (NTRS)

    Korniski, Ron; Bae, Sam Y.; Shearn, Mike; Manohara, Harish; Shahinian, Hrayr

    2011-01-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10 mm in diameter (dime-sized) or less. The largest outside-diameter (OD) endoscope used is 4 mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site, thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  15. Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

    PubMed Central

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  16. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is intended for the analysis of data in the form of point clouds obtained directly from 3D measurements. It is designed for use in end-user applications that can be directly integrated with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor, in the form of a set of spatially distributed FVs, is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Use of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while the additional spatial information keeps the false positive rate at a reasonably low level.

  17. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, 3D video systems using the MVD (multi-view video plus depth) data format are being actively studied. Such a system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and the problems of low temporal consistency and low inter-view correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that the proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.

  18. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  19. Representation of chemical information in OASIS centralized 3D database for existing chemicals.

    PubMed

    Nikolov, Nikolai; Grancharov, Vanio; Stoyanova, Galya; Pavlov, Todor; Mekenyan, Ovanes

    2006-01-01

    The present inventory of existing chemicals in regulatory agencies in North America and Europe, encompassing the chemicals of the European Chemicals Bureau (EINECS, with 61 573 discrete chemicals); the Danish EPA (159 448 chemicals); the U.S. EPA (TSCA, 56 882 chemicals; HPVC, 10 546 chemicals) and pesticides' active and inactive ingredients of the U.S. EPA (1379 chemicals); the Organization for Economic Cooperation and Development (HPVC, 4750 chemicals); Environment Canada (DSL, 10851 chemicals); and the Japanese Ministry of Economy, Trade, and Industry (16811), was combined in a centralized 3D database for existing chemicals. The total number of unique chemicals from all of these databases exceeded 185 500. Defined and undefined chemical mixtures and polymers are handled, along with discrete (hydrolyzing and nonhydrolyzing) chemicals. The database manager provides the storage and retrieval of chemical structures with 2D and 3D data, accounting for molecular flexibility by using representative sets of conformers for each chemical. The electronic and geometric structures of all conformers are quantum-chemically optimized and evaluated. Hence, the database contains over 3.7 million 3D records with hundreds of millions of descriptor data items at the levels of structures, conformers, or atoms. The platform contains a highly developed search subsystem--a search is possible on Chemical Abstracts Service numbers; names; 2D and 3D fragment searches; structural, conformational, or atomic properties; affiliation in other chemical databases; structure similarity; logical combinations; saved queries; and search result exports. Models (collections of logically related descriptors) are supported, including information on a model's author, date, bioassay, organs/tissues, conditions, administration, and so forth. Fragments can be interactively constructed using a visual structure editor. A configurable database browser is designed for the inspection and editing of all types of

  20. Active learning in the lecture theatre using 3D printed objects.

    PubMed

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  1. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments such as laser scanners and 3D cameras, as well as image-based techniques like structure from motion, produce huge point clouds as the basis for further object analysis. This has considerably changed data compilation, away from selective, manually guided processes towards automatic, computer-supported strategies. However, there is still a long way to go to achieve the quality and robustness of manual processes, as the data sets are mostly very complex. Existing strategies for 3D data processing for object detection and reconstruction rely heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and by their inability to handle deviations. Furthermore, the lack of capability to integrate other data or information between the processing steps further exposes their limitations. This restricts the approaches to execution with a strict, predefined strategy and does not allow deviations when new, unexpected situations arise. We propose a solution that brings intelligence into the processing activities through the use of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries, and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". Its flexibility is demonstrated through two entirely different use-case scenarios: Deutsche Bahn (German railway system) for the outdoor scenario and Fraport (Frankfurt Airport) for the indoor scenario. Apart from the difference in their environments, the two scenarios provide different conditions which the solution needs to consider: while the locations of the objects at Fraport were known in advance, those of DB were not known at the beginning.

  2. Active learning in the lecture theatre using 3D printed objects

    PubMed Central

    Smith, David P.

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  3. Laser Scanning for 3D Object Characterization: Infrastructure for Exploration and Analysis of Vegetation Signatures

    NASA Astrophysics Data System (ADS)

    Koenig, K.; Höfle, B.

    2012-04-01

    Mapping and characterization of the three-dimensional nature of vegetation is gaining in importance. Deeper insight is required for, e.g., forest management, biodiversity assessment, habitat analysis, precision agriculture, renewable energy production or the analysis of interactions between biosphere and atmosphere. However, the potential of 3D vegetation characterization has not been fully exploited so far, and new technologies are needed. Laser scanning has evolved into the state-of-the-art technology for highly accurate 3D data acquisition. Several studies have already indicated the high value of 3D vegetation description using laser data. The laser sensors provide a detailed geometric representation of the scanned objects (geometric information) as well as a full profile of the laser energy scattered back to the sensor (radiometric information). In order to exploit the full potential of these datasets, profound knowledge of laser scanning technology for data acquisition, of geoinformation technology for data analysis, and of the object of interest (e.g. vegetation) for data interpretation has to be combined. A signature database is a collection of signatures of reference vegetation objects acquired under known conditions and sensor parameters and can be used to improve information extraction from unclassified vegetation datasets. Different vegetation elements (leaves, branches, etc.) at different heights above ground and with different geometric composition contribute to the overall description (i.e. signature) of the scanned object. The developed tools allow analyzing tree objects according to single features (e.g. echo width and signal amplitude) and to any relation of features and derived statistical values (e.g. ratios of laser point attributes). For example, a single backscatter cross-section value does not allow tree species determination, whereas the average echo width per tree segment can give good estimates. Statistical values and/or distributions (e.g. Gaussian

  4. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    PubMed

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Péter; Baumbach, Tilo; Cloetens, Peter

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies. PMID:23185554

  5. Correlative Nanoscale 3D Imaging of Structure and Composition in Extended Objects

    PubMed Central

    Xu, Feng; Helfen, Lukas; Suhonen, Heikki; Elgrabli, Dan; Bayat, Sam; Reischig, Péter; Baumbach, Tilo; Cloetens, Peter

    2012-01-01

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environmental or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies. PMID:23185554

  6. 3-D representation of aquitard topography using ground-penetrating radar

    SciTech Connect

    Young, R.A.; Sun, Jingsheng

    1995-12-31

    The topography of a clay aquitard is defined by 3D Ground Penetrating Radar (GPR) data at Hill Air Force Base, Utah. Conventional processing augmented by multichannel domain filtering shows a strong reflection from a depth of 20-30 ft despite attenuation by an artificial clay cap approximately 2 ft thick. This reflection correlates very closely with the top of the aquitard as seen in lithology logs at 3 wells crossed by common offset radar profiles from the 3D dataset. Lateral and vertical resolution along the boundary are approximately 2 ft and 1 ft, respectively. The boundary shows abrupt topographic variation of 5 ft over horizontal distances of 20 ft or less, which is probably due to vigorous erosion by streams during lowstands of ancient Lake Bonneville. This irregular topography may provide depressions for accumulation of hydrocarbons and chlorinated organic pollutants. A ridge running the length of the survey area may channel movement of ground water and of hydrocarbons trapped at the surface of the water table. Depth slices through a 3D volume, and picked points along the aquitard displayed in depth and relative elevation perspectives, provide much more useful visualization than several 2D lines by themselves. The three-dimensional GPR image provides far more detailed definition of geologic boundaries than does projection of soil boring logs into two-dimensional profiles.

  7. Dynamic shape modeling of the mitral valve from real-time 3D ultrasound images using continuous medial representation

    NASA Astrophysics Data System (ADS)

    Pouch, Alison M.; Yushkevich, Paul A.; Jackson, Benjamin M.; Gorman, Joseph H., III; Gorman, Robert C.; Sehgal, Chandra M.

    2012-03-01

    Purpose: Patient-specific shape analysis of the mitral valve from real-time 3D ultrasound (rt-3DUS) has broad application to the assessment and surgical treatment of mitral valve disease. Our goal is to demonstrate that continuous medial representation (cm-rep) is an accurate valve shape representation that can be used for statistical shape modeling over the cardiac cycle from rt-3DUS images. Methods: Transesophageal rt-3DUS data acquired from 15 subjects with a range of mitral valve pathology were analyzed. User-initialized segmentation with level sets and symmetric diffeomorphic normalization delineated the mitral leaflets at each time point in the rt-3DUS data series. A deformable cm-rep was fitted to each segmented image of the mitral leaflets in the time series, producing a 4D parametric representation of valve shape in a single cardiac cycle. Model fitting accuracy was evaluated by the Dice overlap, and shape interpolation and principal component analysis (PCA) of 4D valve shape were performed. Results: Of the 289 3D images analyzed, the average Dice overlap between each fitted cm-rep and its target segmentation was 0.880+/-0.018 (max=0.912, min=0.819). The results of PCA represented variability in valve morphology and localized leaflet thickness across subjects. Conclusion: Deformable medial modeling accurately captures valve geometry in rt-3DUS images over the entire cardiac cycle and enables statistical shape analysis of the mitral valve.
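
    The Dice overlap used for evaluation is simple to compute once both the fitted medial model and the reference segmentation are voxelized into binary masks; a minimal sketch with illustrative masks follows.

        # Dice overlap between a fitted model and its target segmentation,
        # both given as binary 3D voxel masks.
        import numpy as np

        def dice(mask_a, mask_b):
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        fitted = np.zeros((64, 64, 64), dtype=bool); fitted[20:40, 20:40, 20:40] = True
        target = np.zeros((64, 64, 64), dtype=bool); target[22:42, 20:40, 20:40] = True
        print(round(dice(fitted, target), 3))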

  8. A 3D interactive multi-object segmentation tool using local robust statistics driven active contours.

    PubMed

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-08-01

    Extracting anatomically and functionally significant structures is one of the important tasks for both the theoretical study of medical image analysis and the clinical and practical community. In the past, much work has been dedicated only to algorithmic development. Nevertheless, for clinical end users, a well-designed algorithm with interactive software is necessary for the algorithm to be utilized in their daily work. Furthermore, the software should preferably be open source so that it can be used and validated not only by the authors but also by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and a conformal-area-driven multiple active contour framework to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open-source, graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and made publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region of the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously, with their interactions motivated by the principles of action and reaction: this not only guarantees mutual exclusiveness among the contours, but also no longer relies upon the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the desired multiple objects. Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we provide

  9. Benchmarking of HPCC: A novel 3D molecular representation combining shape and pharmacophoric descriptors for efficient molecular similarity assessments.

    PubMed

    Karaboga, Arnaud S; Petronin, Florent; Marchetti, Gino; Souchet, Michel; Maigret, Bernard

    2013-04-01

    Since 3D molecular shape is an important determinant of biological activity, designing accurate 3D molecular representations remains of high interest. Several chemoinformatic approaches have been developed to describe accurate molecular shapes. Here, we present a novel 3D molecular description, namely the harmonic pharma chemistry coefficient (HPCC), combining a ligand-centric pharmacophoric description projected onto a spherical-harmonic-based shape of a ligand. The performance of HPCC was evaluated by comparison to the standard ROCS software in a ligand-based virtual screening (VS) approach using the publicly available directory of useful decoys (DUD) data set comprising over 100,000 compounds distributed across 40 protein targets. Our results were analyzed using commonly reported statistics such as the area under the curve (AUC) and normalized sum of logarithms of ranks (NSLR) metrics. Overall, our HPCC 3D method is as efficient as the state-of-the-art ROCS software in terms of enrichment and slightly better for more than half of the DUD targets. Since it is widely accepted that VS results depend strongly on the nature of the protein families, we believe that the present HPCC solution offers an interesting alternative to current ligand-based VS methods. PMID:23467019

  10. Limits on Infants' Ability to Dynamically Update Object Representations

    ERIC Educational Resources Information Center

    Feigenson, Lisa; Yamaguchi, Mariko

    2009-01-01

    Like adults, infants use working memory to represent occluded objects and can update these memory representations to reflect changes to a scene that unfold over time. Here we tested the limits of infants' ability to update object representations in working memory. Eleven-month-old infants participated in a modified foraging task in which they saw…

  11. A 3D approach for object recognition in illuminated scenes with adaptive correlation filters

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2015-09-01

    In this paper we solve the problem of pose recognition of a 3D object in non-uniformly illuminated and noisy scenes. The recognition system employs a bank of space-variant correlation filters constructed with an adaptive approach based on local statistical parameters of the input scene. The position and orientation of the target are estimated with the help of the filter bank. For an observed input frame, the algorithm computes the correlation between the observed image and the bank of filters using a combination of data and task parallelism, taking advantage of a graphics processing unit (GPU) architecture. The pose of the target is estimated by finding the template that best matches the current view of the target within the scene. The performance of the proposed system is evaluated in terms of recognition accuracy, location and orientation errors, and computational performance.
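
    The matching step can be sketched with plain FFT-based correlation against a bank of pose templates, picking the strongest peak; the adaptive, locally tuned filter design and the GPU parallelism of the paper are not reproduced here, and the placeholder bank is purely illustrative.

        # FFT-based correlation of a frame against a bank of pose templates;
        # the template with the strongest peak gives the estimated orientation,
        # and the peak location gives the target position.
        import numpy as np

        def best_pose(scene, templates):
            """scene: 2D image; templates: dict pose -> template (same size as scene)."""
            S = np.fft.fft2(scene)
            best_label, best_peak, best_loc = None, -np.inf, None
            for pose, tmpl in templates.items():
                corr = np.real(np.fft.ifft2(S * np.conj(np.fft.fft2(tmpl))))
                if corr.max() > best_peak:
                    best_label, best_peak = pose, corr.max()
                    best_loc = np.unravel_index(corr.argmax(), corr.shape)
            return best_label, best_loc

        scene = np.random.rand(128, 128)
        bank = {angle: np.roll(scene, angle, axis=1) for angle in (0, 5, 10)}  # placeholder
        print(best_pose(scene, bank)[0])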

  12. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Starting from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional way of determining curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves whereby it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which involves evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which requires only three arithmetic operations.
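
    The two-dimensional discriminant referred to above is the classical B^2 - 4AC test on a cross-section curve written in the general form Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0; a minimal sketch, with a small tolerance added for the degenerate comparisons, is given below.

        # Classify a planar cross-section Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
        # by the two-dimensional discriminant B^2 - 4AC (three arithmetic operations).
        def classify_conic(A, B, C, tol=1e-12):
            disc = B * B - 4.0 * A * C
            if disc < -tol:
                return "circle" if abs(A - C) < tol and abs(B) < tol else "ellipse"
            if disc > tol:
                return "hyperbola"
            return "parabola"

        print(classify_conic(1.0, 0.0, 1.0))    # circle
        print(classify_conic(1.0, 0.0, -1.0))   # hyperbola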

  13. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse and then dense 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Control parameters of these algorithms that affect the quality and computation time of the reconstructed 3D surface are identified. The effects of these control parameters in generating a 3D surface from point clouds of different density are studied. It is shown that the reconstructed surface quality of Poisson reconstruction depends significantly on Samples per node (SN), with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to depend strongly on the Clustering radius and Angle threshold values. The results obtained from this study give the readers of the article valuable insight into the effects of different control parameters on the reconstructed surface quality. PMID:27386376
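
    The two reconstruction paths can be sketched with the Open3D library (the study does not name its toolkit, so this choice is an assumption); note that parameter names differ between implementations, e.g. Open3D exposes an octree depth rather than the Samples-per-node setting discussed above, and the pivoting radii and file name below are illustrative.

        # Poisson and ball-pivoting surface reconstruction from an image-derived
        # point cloud, here via Open3D (toolkit and file name are illustrative).
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("object_dense_cloud.ply")
        pcd.estimate_normals()                    # both methods require oriented normals

        # Poisson reconstruction: reconstruction detail is controlled via octree depth.
        poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)

        # Ball-pivoting: the pivoting radii (in cloud units) control hole filling.
        radii = o3d.utility.DoubleVector([0.005, 0.01, 0.02])
        bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)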

  14. Visuo-haptic multisensory object recognition, categorization, and representation

    PubMed Central

    Lacey, Simon; Sathian, K.

    2014-01-01

    Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery. PMID:25101014

  15. Ball-scale based hierarchical multi-object recognition in 3D medical images

    NASA Astrophysics Data System (ADS)

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  16. Multisensory Part-based Representations of Objects in Human Lateral Occipital Cortex.

    PubMed

    Erdogan, Goker; Chen, Quanjing; Garcea, Frank E; Mahon, Bradford Z; Jacobs, Robert A

    2016-06-01

    The format of high-level object representations in temporal-occipital cortex is a fundamental and as yet unresolved issue. Here we use fMRI to show that human lateral occipital cortex (LOC) encodes novel 3-D objects in a multisensory and part-based format. We show that visual and haptic exploration of objects leads to similar patterns of neural activity in human LOC and that the shared variance between visually and haptically induced patterns of BOLD contrast in LOC reflects the part structure of the objects. We also show that linear classifiers trained on neural data from LOC on a subset of the objects successfully predict a novel object based on its component part structure. These data demonstrate a multisensory code for object representations in LOC that specifies the part structure of objects. PMID:26918587

  17. Automatic detection of anatomical features on 3D ear impressions for canonical representation.

    PubMed

    Baloch, Sajjad; Melkisetoglu, Rupen; Flöry, Simon; Azernikov, Sergei; Slabaugh, Greg; Zouhar, Alexander; Fang, Tong

    2010-01-01

    We propose a shape descriptor for 3D ear impressions, derived from a comprehensive set of anatomical features. Motivated by hearing aid (HA) manufacturing, the selection of the anatomical features is carried out according to their uniqueness and importance in HA design. This leads to a canonical ear signature that is highly distinctive and potentially well suited for classification. First, the anatomical features are characterized into generic topological and geometric features, namely concavities, elbows, ridges, peaks, and bumps on the surface of the ear. Fast and robust algorithms are then developed for their detection. This indirect approach ensures the generality of the algorithms with potential applications in biomedicine, biometrics, and reverse engineering. PMID:20879444

  18. A contest of sensors in close range 3D imaging: performance evaluation with a new metric test object

    NASA Astrophysics Data System (ADS)

    Hess, M.; Robson, S.; Hosseininaveh Ahmadabadian, A.

    2014-06-01

    An independent means of 3D image quality assessment is introduced, addressing non-professional users of sensors and freeware, which is largely closed-source and lacks quality metrics for processing steps such as alignment. A performance evaluation of commercially available, state-of-the-art close-range 3D imaging technologies is demonstrated with the help of a newly developed Portable Metric Test Artefact. The use of this test object provides quality control through a quantitative assessment of 3D imaging sensors. It will enable users to specify precisely which spatial resolution and geometry recording they expect as the outcome of their 3D digitizing process. This will lead to the creation of high-quality 3D digital surrogates and 3D digital assets. The paper is presented in the form of a competition of teams, and a possible winner will emerge.

  19. Method for contour extraction for object representation

    DOEpatents

    Skourikhine, Alexei N.; Prasad, Lakshman

    2005-08-30

    Contours are extracted to represent a pixelated object in a background pixel field. An object pixel that is the start of a new contour for the object is located and identified as the first pixel of the new contour. A first contour point is then located at the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points at the mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is encountered again, completing the trace of the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
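
    The patented mid-edge tracing itself is not reproduced here; as an off-the-shelf analogue for extracting closed sub-pixel contours of a pixelated object, scikit-image's marching-squares tracer at the 0.5 iso-level behaves comparably and is shown below with a placeholder mask.

        # Extract closed contours of a pixelated object with a marching-squares
        # trace at the 0.5 iso-level (an analogue, not the patented method above).
        import numpy as np
        from skimage.measure import find_contours

        mask = np.zeros((20, 20))
        mask[5:15, 4:12] = 1                      # placeholder object pixels
        contours = find_contours(mask, 0.5)       # list of (row, col) point sequences
        for label, contour in enumerate(contours):
            print("contour", label, "with", len(contour), "points")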

  20. The role of the foreshortening cue in the perception of 3D object slant.

    PubMed

    Ivanov, Iliya V; Kramer, Daniel J; Mullen, Kathy T

    2014-01-01

    Slant is the degree to which a surface recedes or slopes away from the observer about the horizontal axis. The perception of surface slant may be derived from static monocular cues, including linear perspective and foreshortening, applied to single shapes or to multi-element textures. The extent to which color vision can use these cues to determine slant in the absence of achromatic contrast is still unclear. Although previous demonstrations have shown that some pictures and images may lose their depth when presented at isoluminance, this has not been tested systematically using stimuli within the spatio-temporal passband of color vision. Here we test whether the foreshortening cue from surface compression (change in the ratio of width to length) can induce slant perception for single shapes for both color and luminance vision. We use radial frequency patterns with narrowband spatio-temporal properties. In the first experiment, both a manual task (lever rotation) and a visual task (line rotation) are used as metrics to measure the perception of slant for achromatic, red-green isoluminant and S-cone isolating stimuli. In the second experiment, we measure slant discrimination thresholds as a function of depicted slant in a 2AFC paradigm and find similar thresholds for chromatic and achromatic stimuli. We conclude that both color and luminance vision can use the foreshortening of a single surface to perceive slant, with performances similar to those obtained using other strong cues for slant, such as texture. This has implications for the role of color in monocular 3D vision, and the cortical organization used in 3D object perception. PMID:24216007

  1. An objective method for 3D quality prediction using visual annoyance and acceptability level

    NASA Astrophysics Data System (ADS)

    Khaustova, Darya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2015-03-01

    This study proposes a new objective metric for video quality assessment. It predicts the impact of technical quality parameters relevant to visual discomfort on human perception. The proposed metric is based on a 3-level color scale: (1) Green - not annoying, (2) Orange - annoying but acceptable, (3) Red - not acceptable. Therefore, each color category reflects viewers' judgment based on stimulus acceptability and induced visual annoyance. The boundary between the "Green" and "Orange" categories defines the visual annoyance threshold, while the boundary between the "Orange" and "Red" categories defines the acceptability threshold. Once the technical quality parameters are measured, they are compared to perceptual thresholds. Such comparison allows estimating the quality of the 3D video sequence. Besides, the proposed metric is adjustable to service or production requirements by changing the percentage of acceptability and/or visual annoyance. The performance of the metric is evaluated in a subjective experiment that uses three stereoscopic scenes. Five view asymmetries with four degradation levels were introduced into initial test content. The results demonstrate high correlations between subjective scores and objective predictions for all view asymmetries.
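
    The comparison of a measured parameter against the two perceptual thresholds reduces to a simple mapping onto the three categories; a minimal sketch with illustrative threshold values follows.

        # Map a measured view-asymmetry parameter to the 3-level judgment scale,
        # given the visual-annoyance and acceptability thresholds (values illustrative).
        def quality_category(value, annoyance_threshold=0.5, acceptability_threshold=1.2):
            if value <= annoyance_threshold:
                return "Green"    # not annoying
            if value <= acceptability_threshold:
                return "Orange"   # annoying but acceptable
            return "Red"          # not acceptable

        print(quality_category(0.7))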

  2. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  3. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed Central

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  4. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for getting 3D-images from the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and for scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D-data of other geophysical methods. 3D-seismic data can be displayed in different ways to give a spatial impression of the subsurface. These are a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures and thus of the processes in the subsurface should be increased. Stereoscopic techniques are implemented, for example, in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  5. Generating virtual textile composite specimens using statistical data from micro-computed tomography: 3D tow representations

    NASA Astrophysics Data System (ADS)

    Rinaldi, Renaud G.; Blacklock, Matthew; Bale, Hrishikesh; Begley, Matthew R.; Cox, Brian N.

    2012-08-01

    Recent work presented a Monte Carlo algorithm based on Markov Chain operators for generating replicas of textile composite specimens that possess the same statistical characteristics as specimens imaged using high resolution x-ray computed tomography. That work represented the textile reinforcement by one-dimensional tow loci in three-dimensional space, suitable for use in the Binary Model of textile composites. Here analogous algorithms are used to generate solid, three-dimensional (3D) tow representations, to provide geometrical models for more detailed failure analyses. The algorithms for generating 3D models are divided into those that refer to the topology of the textile and those that deal with its geometry. The topological rules carry all the information that distinguishes textiles with different interlacing patterns (weaves, braids, etc.) and provide instructions for resolving interpenetrations or ordering errors among tows. They also simplify writing a single computer program that can accept input data for generic textile cases. The geometrical rules adjust the shape and smoothness of the generated virtual specimens to match data from imaged specimens. The virtual specimen generator is illustrated using data for an angle interlock weave, a common 3D textile architecture.
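
    A minimal, heavily simplified sketch of the Markov-chain idea: generate the deviation of a virtual tow centreline from its ideal path as a first-order (AR(1)) chain whose standard deviation and correlation length would, in the actual method, be estimated from computed-tomography images. The paper's calibrated Markov operators and the topological rules for interlacing patterns are not reproduced here.

    ```python
    import numpy as np

    # Hedged sketch of a Markov-chain generator for a virtual tow centreline:
    # deviations from the ideal path are sampled point by point with a
    # prescribed standard deviation and correlation length. The paper's
    # CT-calibrated operators and textile topology rules are not reproduced.

    def generate_tow_deviation(n_points, sigma, corr_length, spacing, rng=None):
        """Return centreline deviations sampled point-by-point along the tow."""
        rng = np.random.default_rng(rng)
        rho = np.exp(-spacing / corr_length)          # point-to-point correlation
        dev = np.zeros(n_points)
        dev[0] = rng.normal(0.0, sigma)
        for i in range(1, n_points):
            # conditional Gaussian: next deviation given the previous one
            dev[i] = rho * dev[i - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
        return dev

    deviations = generate_tow_deviation(n_points=200, sigma=0.05,
                                        corr_length=2.0, spacing=0.1, rng=0)
    print(deviations[:5])
    ```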

  6. A Dynamic 3D Graphical Representation for RNA Structure Analysis and Its Application in Non-Coding RNA Classification

    PubMed Central

    Dong, Xiaoqing; Fang, Yiliang; Wang, Kejing; Zhu, Lijuan; Wang, Ke; Huang, Tao

    2016-01-01

    With the development of new technologies in transcriptome and epigenetics, RNAs have been identified to play more and more important roles in life processes. Consequently, various methods have been proposed to assess the biological functions of RNAs and thus classify them functionally, among which comparative study of RNA structures is perhaps the most important one. To measure the structural similarity of RNAs and classify them, we propose a novel three dimensional (3D) graphical representation of RNA secondary structure, in which an RNA secondary structure is first transformed into a characteristic sequence based on chemical property of nucleic acids; a dynamic 3D graph is then constructed for the characteristic sequence; and lastly a numerical characterization of the 3D graph is used to represent the RNA secondary structure. We tested our algorithm on three datasets: (1) Dataset I consisting of nine RNA secondary structures of viruses, (2) Dataset II consisting of complex RNA secondary structures including pseudo-knots, and (3) Dataset III consisting of 18 non-coding RNA families. We also compare our method with other nine existing methods using Dataset II and III. The results demonstrate that our method is better than other methods in similarity measurement and classification of RNA secondary structures. PMID:27213271
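
    A schematic sketch of the pipeline described in this abstract (characteristic sequence, then 3D walk, then numerical descriptor). The step directions and the descriptor below are illustrative stand-ins; the paper's chemical-property encoding, dynamic graph construction and similarity measure are not reproduced.

    ```python
    import numpy as np

    # Hedged schematic of the pipeline: encode a characteristic sequence as a
    # 3D walk and summarise the walk with a simple numeric descriptor. The
    # step directions and descriptor are illustrative, not the paper's.

    STEPS = {                      # hypothetical unit steps per character class
        "A": np.array([1.0, 0.0, 0.0]),
        "G": np.array([0.0, 1.0, 0.0]),
        "C": np.array([0.0, 0.0, 1.0]),
        "U": np.array([-1.0, -1.0, -1.0]) / np.sqrt(3.0),
    }

    def walk_3d(sequence):
        """Cumulative 3D coordinates of the graphical representation."""
        return np.cumsum([STEPS[c] for c in sequence], axis=0)

    def descriptor(coords):
        """Crude shape descriptor: leading eigenvalue of the coordinate covariance."""
        cov = np.cov(coords.T)
        return float(np.max(np.linalg.eigvalsh(cov)))

    seq = "GGCUAUAGCUCAGUUGG"
    print(descriptor(walk_3d(seq)))
    ```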

  7. A Dynamic 3D Graphical Representation for RNA Structure Analysis and Its Application in Non-Coding RNA Classification.

    PubMed

    Zhang, Yi; Huang, Haiyun; Dong, Xiaoqing; Fang, Yiliang; Wang, Kejing; Zhu, Lijuan; Wang, Ke; Huang, Tao; Yang, Jialiang

    2016-01-01

    With the development of new technologies in transcriptome and epigenetics, RNAs have been identified to play more and more important roles in life processes. Consequently, various methods have been proposed to assess the biological functions of RNAs and thus classify them functionally, among which comparative study of RNA structures is perhaps the most important one. To measure the structural similarity of RNAs and classify them, we propose a novel three dimensional (3D) graphical representation of RNA secondary structure, in which an RNA secondary structure is first transformed into a characteristic sequence based on chemical property of nucleic acids; a dynamic 3D graph is then constructed for the characteristic sequence; and lastly a numerical characterization of the 3D graph is used to represent the RNA secondary structure. We tested our algorithm on three datasets: (1) Dataset I consisting of nine RNA secondary structures of viruses, (2) Dataset II consisting of complex RNA secondary structures including pseudo-knots, and (3) Dataset III consisting of 18 non-coding RNA families. We also compare our method with other nine existing methods using Dataset II and III. The results demonstrate that our method is better than other methods in similarity measurement and classification of RNA secondary structures. PMID:27213271

  8. Spatiotemporal representation of 3D hand trajectory based on beta-elliptic models.

    PubMed

    Boubaker, Houcine; Rezzoug, Nasser; Kherallah, Monji; Gorce, Philippe; Alimi, Adel M

    2015-01-01

    The aim of this paper was to model the hand trajectory during grasping by extending to 3D the 2D beta-elliptic model of written language. The interest of this model is that it takes into account both geometric and velocity information. The method relies on the decomposition of the task-space trajectories into elementary bricks. These bricks are characterized by a velocity profile modelled with beta functions and a geometry modelled with elliptic shapes. A database of grasping movements has been constructed and the errors of reconstruction were assessed (distance and curvature) considering two variations of the beta-elliptic model ('quarter ellipse' and 'two tangent points' methods). The results showed that the method based on two tangent points outperforms the quarter ellipse method, with average and maximum relative errors of 2.73% and 8.62%, respectively, and a maximum curvature error of 9.26% for the former. This modelling approach can find interesting applications in characterizing the improvement due to a rehabilitation or teaching process through a quantitative measurement of hand trajectory parameters. PMID:25199025
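
    The beta velocity profile underlying beta-elliptic models is commonly written as v(t) = K ((t - t0)/(tc - t0))^p ((t1 - t)/(t1 - tc))^q with tc = (p t1 + q t0)/(p + q). The sketch below evaluates one such elementary stroke profile; the elliptic geometry fitting and the 3D extension proposed in the paper are not reproduced.

    ```python
    import numpy as np

    # Hedged sketch of the beta velocity profile commonly used in beta-elliptic
    # stroke models: a bell-shaped curve on [t0, t1] whose asymmetry is set by
    # the shape parameters p and q. The 3D elliptic geometry fitting used for
    # grasping trajectories is not reproduced here.

    def beta_profile(t, t0, t1, p, q, amplitude=1.0):
        """Velocity magnitude of one elementary stroke at times t."""
        t = np.asarray(t, dtype=float)
        tc = (p * t1 + q * t0) / (p + q)          # time of peak velocity
        v = np.zeros_like(t)
        inside = (t > t0) & (t < t1)
        v[inside] = amplitude * (((t[inside] - t0) / (tc - t0)) ** p
                                 * ((t1 - t[inside]) / (t1 - tc)) ** q)
        return v

    t = np.linspace(0.0, 1.0, 11)
    print(beta_profile(t, t0=0.0, t1=1.0, p=2.0, q=3.0))
    ```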

  9. High Resolution Ultrasonic Method for 3D Fingerprint Representation in Biometrics

    NASA Astrophysics Data System (ADS)

    Maev, R. Gr.; Bakulin, E. Y.; Maeva, E. Y.; Severin, F. M.

    Biometrics is an important field which studies different possible ways of personal identification. Among the many existing biometric techniques, fingerprint recognition stands alone, because a very large database of fingerprints has already been acquired. Also, fingerprints are important evidence that can be collected at a crime scene. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. The ultrasonic method of fingerprint imaging was originally introduced over a decade ago as the mapping of the reflection coefficient at the interface between the finger and a covering plate, and it has shown very good reliability while being free from the imperfections of the two earlier methods. This work introduces a newer development of ultrasonic fingerprint imaging, focusing on the imaging of the internal structures of fingerprints (including sweat pores) with a raw acoustic resolution of about 500 dpi (0.05 mm), using a scanning acoustic microscope to obtain images and acoustic data in the form of a 3D data array. C-scans from different depths inside the fingerprint area of the fingers of several volunteers were obtained and showed good contrast of ridges-and-valleys patterns and practically exact correspondence to the standard ink-and-paper prints of the same areas. An important feature revealed in the acoustic images was the clear appearance of the sweat pores, which could provide additional means of identification.

  10. Object representation in the human auditory system

    PubMed Central

    Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto

    2010-01-01

    One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch ’border’ between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636

  11. Object-Centered Knowledge Representation and Information Retrieval.

    ERIC Educational Resources Information Center

    Panyr, Jiri

    1996-01-01

    Discusses object-centered knowledge representation and information retrieval. Highlights include semantic networks; frames; predicative (declarative) and associative knowledge; cluster analysis; creation of subconcepts and superconcepts; automatic classification; hierarchies and pseudohierarchies; graph theory; term classification; clustering of…

  12. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  13. Learning Warps Object Representations in the Ventral Temporal Cortex.

    PubMed

    Clarke, Alex; Pell, Philip J; Ranganath, Charan; Tyler, Lorraine K

    2016-07-01

    The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., "made of wood," "floats") and spatial contextual associations (e.g., "found in gardens") with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning with the changes related to the specific kind of newly acquired information. PMID:26967942

  14. A Template Engine for Parsing Objects from Textual Representations

    NASA Astrophysics Data System (ADS)

    Rajković, Milan; Stanković, Milena; Marković, Ivica

    2011-09-01

    Template engines are widely used for separation of business and presentation logic. They are commonly used in web applications for clean rendering of HTML pages. Another area of usage is message formatting in distributed applications where they transform objects to appropriate representations. This paper explores the possibility of using templates for a reverse process—for creating objects starting from their representations. We present the prototype of engine that we have developed, and describe benefits and drawbacks of this approach.
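
    A minimal sketch of the "reverse templating" idea: compile a template with named placeholders into a regular expression and parse field values back out of a textual representation. The engine described in the paper is considerably more general; the names and syntax below are illustrative only.

    ```python
    import re

    # Hedged sketch of reverse templating: turn a template with named
    # placeholders into a regular expression and parse an object (here just
    # a dict) back out of its textual representation.

    def template_to_pattern(template):
        """Escape literal text and replace {field} placeholders with named groups."""
        parts = re.split(r"\{(\w+)\}", template)
        pattern = ""
        for i, part in enumerate(parts):
            if i % 2 == 0:
                pattern += re.escape(part)             # literal text
            else:
                pattern += f"(?P<{part}>.+?)"          # placeholder -> capture group
        return re.compile(pattern + r"$")

    def parse(template, text):
        match = template_to_pattern(template).match(text)
        return match.groupdict() if match else None

    print(parse("Order {order_id} shipped to {city}", "Order 1042 shipped to Oslo"))
    # -> {'order_id': '1042', 'city': 'Oslo'}
    ```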

  15. An object-based methodology for knowledge representation in SGML

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows for representation of knowledge from multiple and varying viewpoints. The event construct allows actual use of knowledge to be represented. The SGML implementation of the methodology facilitates usability, structured, yet flexible knowledge design, and sharing and reuse of knowledge class libraries.

  16. A multi-objective optimization framework to model 3D river and landscape evolution processes

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Castelletti, Andrea; Cominola, Andrea; Mason, Emanuele; Paik, Kyungrock

    2013-04-01

    Water and sediment interactions shape hillslopes, regulate soil erosion and sedimentation, and organize river networks. Landscape evolution and river organization occur at various spatial and temporal scales, and understanding and modelling them is highly complex. The idea of a least action principle governing river network evolution has been proposed many times as a simpler approach than others in the literature. These theories assume that river networks, as observed in nature, self-organize and act on soil transportation in order to satisfy a particular "optimality" criterion. Accordingly, river and landscape weathering can be simulated by solving an optimization problem, where the choice of the criterion to be optimized becomes the initial assumption. The comparison between natural river networks and optimized ones verifies the correctness of this initial assumption. Yet, various criteria have been proposed in the literature and there is no consensus on which is better able to explain river network features observed in nature, such as network branching and river bed profile: each is able to reproduce some river features through simplified modelling of the natural processes, but fails to characterize their whole complexity (3D structure and dynamics). Some of the criteria formulated in the literature partly conflict: the reason is that their formulations rely on mathematical and theoretical simplifications of the natural system that are suitable for specific spatial and temporal scales but fail to represent the whole set of processes characterizing landscape evolution. In an attempt to address some of these scientific questions, we tested the suitability of using a multi-objective optimization framework to describe river and landscape evolution in a 3D spatial domain. A synthetic landscape is used to this purpose. Multiple, alternative river network evolutions, corresponding to as many tradeoffs between the different and partly

  17. Superquadrics objects representation for robot manipulation

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Costa, M. Fernanda; Erlhagen, Wolfram; Bicho, Estela

    2016-06-01

    Superquadrics are mathematically quite simple and can produce a variety of shapes using a low-order parameterization. Furthermore, they have closed-form equations and can therefore be used in the formulation of robotic movement planning problems, in particular in obstacle-avoidance and grasping constraints. In this paper we explore the modeling of objects using superquadrics. The classical nonlinear optimization problem for fitting shapes is extended by adding nonlinear constraints. The numerical results obtained by two different optimization methods are presented, and a comparison of the volume of the superquadrics to the volume of simple ellipsoids is made.
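
    The superquadric inside-outside function has the closed form F(x, y, z) = (|x/a1|^(2/e2) + |y/a2|^(2/e2))^(e2/e1) + |z/a3|^(2/e1), with F = 1 on the surface. A minimal fitting sketch based on this function is shown below; the additional nonlinear constraints and the ellipsoid volume comparison from the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Hedged sketch: the superquadric inside-outside function and a basic
    # least-squares fit to a point cloud. The paper's extra nonlinear
    # constraints and volume comparison are not reproduced here.

    def inside_outside(points, a1, a2, a3, e1, e2):
        """F(x,y,z); F = 1 on the superquadric surface, <1 inside, >1 outside."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        return ((np.abs(x / a1) ** (2.0 / e2) + np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
                + np.abs(z / a3) ** (2.0 / e1))

    def fit_superquadric(points, x0=(1.0, 1.0, 1.0, 1.0, 1.0)):
        """Fit size (a1,a2,a3) and shape (e1,e2) parameters to 3D points."""
        def residuals(params):
            a1, a2, a3, e1, e2 = params
            # common formulation: penalise deviation of F**e1 from 1
            return inside_outside(points, a1, a2, a3, e1, e2) ** e1 - 1.0
        bounds = ([1e-3] * 3 + [0.1, 0.1], [np.inf] * 3 + [2.0, 2.0])
        return least_squares(residuals, x0, bounds=bounds).x

    # Synthetic test: points on an axis-aligned ellipsoid (e1 = e2 = 1).
    rng = np.random.default_rng(0)
    u, v = rng.uniform(0, np.pi, 500), rng.uniform(0, 2 * np.pi, 500)
    pts = np.column_stack([2.0 * np.sin(u) * np.cos(v),
                           1.0 * np.sin(u) * np.sin(v),
                           0.5 * np.cos(u)])
    print(fit_superquadric(pts))   # roughly [2.0, 1.0, 0.5, 1.0, 1.0]
    ```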

  18. Ionized Outflows in 3-D Insights from Herbig-Haro Objects and Applications to Nearby AGN

    NASA Technical Reports Server (NTRS)

    Cecil, Gerald

    1999-01-01

    HST shows that the gas distributions of these objects are complex and clumpy at the limit of resolution. HST spectra have lumpy emission-line profiles, indicating unresolved sub-structure. The advantages of 3D over slits on gas so distributed are: robust flux estimates of various dynamical systems projected along lines of sight, sensitivity to fainter spectral lines that are physical diagnostics (reddening, gas density, T, excitation mechanisms, abundances), and improved prospects for recovery of unobserved dimensions of phase-space. These advantages allow more confident modeling for more profound inquiry into the underlying dynamics. The main complication is the effort required to link multi-frequency datasets that optimally track the energy flow through various phases of the ISM. This tedium has limited the number of objects that have been thoroughly analyzed to the a priori most spectacular systems. For HHOs, proper motions constrain the ambient B-field, shock velocity, gas abundances, mass-loss rates, source duty-cycle, and tie-ins with molecular flows. If the shock speed, hence ionization fraction, is indeed small then the ionized gas is a significant part of the flow energetics. For AGNs, nuclear beaming is a source of ionization ambiguity. Establishing the energetics of the outflow is critical to determining how the accretion disk loses its energy. CXO will provide new constraints (especially spectral) on AGN outflows, and STIS UV-spectroscopy is also constraining cloud properties (although limited by extinction). HHOs show some of the things that we will find around AGNs. I illustrate these points with results from ground-based and HST programs being pursued with collaborators.

  19. New 3D thermal evolution model for icy bodies application to trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Guilbert-Lepoutre, A.; Lasue, J.; Federico, C.; Coradini, A.; Orosei, R.; Rosenberg, E. D.

    2011-05-01

    Context. Thermal evolution models have been developed over the years to investigate the evolution of thermal properties based on the transfer of heat fluxes or transport of gas through a porous matrix, among others. Applications of such models to trans-Neptunian objects (TNOs) and Centaurs have shown that these bodies could be strongly differentiated from the point of view of chemistry (i.e. loss of most volatile ices), as well as of physics (e.g. melting of water ice), resulting in stratified internal structures with differentiated cores and potential pristine material close to the surface. In this context, some observational results, such as the detection of crystalline water ice or volatiles, remain puzzling. Aims: In this paper, we present a new fully three-dimensional thermal evolution model. With this model, we aim to improve the determination of the temperature distribution inside icy bodies such as TNOs by accounting for lateral heat fluxes, which have been proven to be important for accurate simulations. We also would like to be able to account for heterogeneous boundary conditions at the surface, through various albedo properties for example, that might induce different local temperature distributions. Methods: In a departure from published modeling approaches, the heat diffusion problem and its boundary conditions are represented in terms of real spherical harmonics, increasing the numerical efficiency by roughly an order of magnitude. We then compare this new model and another 3D model recently published to illustrate the advantages and limits of the new model. We try to put some constraints on the presence of crystalline water ice at the surface of TNOs. Results: The results obtained with this new model are in excellent agreement with results obtained by different groups with various models. Small TNOs could remain primitive unless they are formed quickly (less than 2 Myr) or are debris from the disruption of larger bodies. We find that, for

  20. Object Representations Maintain Attentional Control Settings across Space and Time

    ERIC Educational Resources Information Center

    Schreij, Daniel; Olivers, Christian N. L.

    2009-01-01

    Previous research has revealed that we create and maintain mental representations for perceived objects on the basis of their spatiotemporal continuity. An important question is what type of information can be maintained within these so-called object files. We provide evidence that object files retain specific attentional control settings for…

  1. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    PubMed

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-01

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. PMID:25966320

  2. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2011-11-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We therefore examined the effect of the direction of object motion relative to the transducer sweep on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0-35 mm s-1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision decreased with speed and tracking failure was observed at speeds of greater than 14 mm s-1. Tracking failure was attributed to speckle de-correlation as a result of decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, using the swept probe technology, speckle tracking accuracy is currently too poor to track homogeneous tissue over
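
    Speckle tracking is commonly implemented as block matching: a kernel from one frame or volume is compared with shifted windows in the next, and the shift maximising normalised cross-correlation gives the displacement estimate. The 1D sketch below illustrates only that generic principle, not the 3D swept-volume implementation evaluated in the paper.

    ```python
    import numpy as np

    # Hedged 1D sketch of block-matching speckle tracking: find the shift
    # that maximises normalised cross-correlation between a kernel from
    # frame 1 and candidate windows in frame 2. No sub-sample interpolation
    # and no 3D swept-volume geometry are modelled here.

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def track_shift(frame1, frame2, start, kernel_len, max_shift):
        kernel = frame1[start:start + kernel_len]
        scores = []
        for s in range(-max_shift, max_shift + 1):
            window = frame2[start + s:start + s + kernel_len]
            scores.append(ncc(kernel, window))
        return int(np.argmax(scores)) - max_shift

    rng = np.random.default_rng(3)
    speckle = rng.normal(size=500)
    shifted = np.roll(speckle, 7) + 0.05 * rng.normal(size=500)   # true shift = 7 samples
    print(track_shift(speckle, shifted, start=200, kernel_len=64, max_shift=20))  # -> 7
    ```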

  3. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily driven by affective mechanisms can enhance the steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons. It shows high information transfer rates, it takes users only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  4. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  5. Assessment of children's object-representations with the Rorschach.

    PubMed

    Tuber, S B

    1989-09-01

    The recent emphasis on object relations theory as an explanatory model for personality development has been paralleled in the psychological test literature by measures that assess the quality of object-representations. The author reviews one such measure--the Mutuality of Autonomy (MOA) scale--in its research applications with adults and children. He then extends the scale to the psychotherapy of children and suggests that Rorschach object-representation scores can be of heuristic value in understanding the treatment process. PMID:2790351

  6. ARE SURFACE PROPERTIES INTEGRATED INTO VISUO-HAPTIC OBJECT REPRESENTATIONS?

    PubMed Central

    Lacey, Simon; Hall, Jenelle; Sathian, K.

    2011-01-01

    Object recognition studies have almost exclusively involved vision, focusing on shape rather than surface properties such as color. Visual object representations are thought to integrate shape and color information because changing the color of studied objects impairs their subsequent recognition. However, little is known about integration of surface properties into visuo-haptic multisensory representations. Here, participants studied objects with distinct patterns of surface properties (color in Experiment 1, texture in Experiments 2 & 3) and had to discriminate between object shapes when color/texture schemes were altered in within-modal (visual and haptic) and cross-modal (visual study/haptic test and vice versa) conditions. In Experiment 1, color changes impaired within-modal visual recognition but had no effect on cross-modal recognition, suggesting that the multisensory representation is not influenced by modality-specific surface properties. In Experiment 2, texture changes impaired recognition in all conditions, suggesting that both unisensory and multisensory representations integrate modality-independent surface properties. However, the cross-modal impairment might have reflected either the texture change or a failure to form the multisensory representation. Experiment 3 attempted to distinguish between these possibilities by combining changes in texture with changes in orientation, taking advantage of the known view-independence of the multisensory representation, but the results were not conclusive owing to the overwhelming effect of texture change. The simplest account is that the multisensory representation integrates shape and modality-independent surface properties. However, more work is required to investigate this and the conditions under which multisensory integration of structural and surface properties occurs. PMID:20584193

  7. Saccade latency reveals episodic representation of object color.

    PubMed

    Gordon, Robert D

    2014-08-01

    While previous studies suggest that identity, but not color, plays a role in episodic object representation, such studies have typically used tasks in which only identity is relevant, raising the possibility that the results reflect task demands, rather than the general principles that underlie object representation. In the present study, participants viewed a preview display containing one (Experiments 1 and 2) or two (Experiment 3) letters, then viewed a target display containing a single letter, in either the same or a different location. Participants executed an immediate saccade to fixate the target; saccade latency served as the dependent variable. In all experiments, saccade latencies were longer to fixate a target appearing in its previewed location, consistent with a bias to attend to new objects rather than to objects for which episodic representations are being maintained in visual working memory. The results of Experiment 3 further demonstrate, however, that changing target color eliminates these latency differences. The results suggest that color and identity are part of episodic representation even when not task relevant and that examining biases in saccade execution may be a useful approach to studying episodic representation. PMID:24820158

  8. Saccade Latency Reveals Episodic Representation of Object Color

    PubMed Central

    Gordon, Robert D.

    2014-01-01

    While previous studies suggest that identity, but not color, plays a role in episodic object representation, such studies have typically used tasks in which only identity is relevant, raising the possibility that the results reflect task demands rather than the general principles that underlie object representation. In the present study, participants viewed a preview display containing one (Experiments 1 and 2) or two (Experiment 3) letters, then viewed a target display containing a single letter, in either the same or a different location. Participants executed an immediate saccade to fixate the target; saccade latency served as the dependent variable. In all experiments, saccade latencies were longer to fixate a target appearing in its previewed location, consistent with a bias to attend to new objects rather than to objects for which episodic representations are being maintained in visual working memory. The results of Experiment 3 further demonstrate, however, that changing target color eliminates these latency differences. The results suggest that color and identity are part of episodic representation even when not task relevant, and that examining biases in saccade execution may be a useful approach to studying episodic representation. PMID:24820158

  9. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual mesh leapfrog-scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems, of current practical interest, involving structured composites and metamaterials.
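
    The dual-mesh scheme itself is beyond a short example, but the leapfrog time stepping it generalises can be illustrated on a regular 1D Yee grid, with E and H staggered in space and time. The sketch below is that textbook 1D case, not the paper's 3D Delaunay/Voronoi scheme for anisotropic lossy materials.

    ```python
    import numpy as np

    # Hedged 1D illustration of Yee-style leapfrog time stepping (E and H
    # staggered in space and time, normalised units). The paper's 3D
    # Delaunay/Voronoi dual-mesh generalisation is not reproduced here.

    nx, nt = 200, 400
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                      # satisfies the 1D CFL condition
    ez = np.zeros(nx)                      # E sampled at integer grid points
    hy = np.zeros(nx - 1)                  # H sampled at half grid points

    for n in range(nt):
        # update H at half time steps from the spatial difference of E
        hy += (dt / dx) * (ez[1:] - ez[:-1])
        # update interior E at integer time steps from the spatial difference of H
        ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])
        # soft Gaussian source in the middle of the grid
        ez[nx // 2] += np.exp(-((n - 30) ** 2) / 100.0)

    print(ez.max())
    ```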

  10. Fusion of Depth and Intensity Data for Three-Dimensional Object Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Ramirez Cortes, Juan Manuel

    For humans, retinal images provide sufficient information for the complete understanding of three-dimensional shapes in a scene. The ultimate goal of computer vision is to develop an automated system able to reproduce some of the tasks performed in a natural way by human beings as recognition, classification, or analysis of the environment as basis for further decisions. At the first level, referred to as early computer vision, the task is to extract symbolic descriptive information in a scene from a variety of sensory data. The second level is concerned with classification, recognition, or decision systems and the related heuristics, that aid the processing of the available information. This research is concerned with a new approach to 3-D object representation and recognition using an interpolation scheme applied to the information from the fusion of range and intensity data. The range image acquisition uses a methodology based on a passive stereo-vision model originally developed to be used with a sequence of images. However, curved features, large disparities and noisy input images are some of the problems associated with real imagery, which need to be addressed prior to applying the matching techniques in the spatial frequency domain. Some of the above mentioned problems can only be solved by computationally intensive spatial domain algorithms. Regularization techniques are explored for surface recovery from sparse range data, and intensity images are incorporated in the final representation of the surface. As an important application, the problem of 3-D representation of retinal images for extraction of quantitative information is addressed. Range information is also combined with intensity data to provide a more accurate numerical description based on aspect graphs. This representation is used as input to a three-dimensional object recognition system. Such an approach results in an improved performance of 3-D object classifiers.

  11. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    PubMed Central

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403

  12. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403

  13. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    PubMed Central

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  14. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    PubMed

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  15. On the Relations between Action Planning, Object Identification, and Motor Representations of Observed Actions and Objects

    ERIC Educational Resources Information Center

    Vainio, Lari; Symes, Ed; Ellis, Rob; Tucker, Mike; Ottoboni, Giovanni

    2008-01-01

    Recent evidence suggests that viewing a static prime object (a hand grasp), can activate action representations that affect the subsequent identification of graspable target objects. The present study explored whether stronger effects on target object identification would occur when the prime object (a hand grasp) was made more action-rich and…

  16. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  17. Neural representations of unfamiliar objects are modulated by sensorimotor experience.

    PubMed

    Bellebaum, Christian; Tettamanti, Marco; Marchetta, Elisa; Della Rosa, Pasquale; Rizzo, Giovanna; Daum, Irene; Cappa, Stefano F

    2013-04-01

    Sensory/functional accounts of semantic memory organization emphasize that object representations in the brain reflect the modalities involved in object knowledge acquisition. The present study aimed to elucidate the impact of different types of object-related sensorimotor experience on the neural representations of novel objects. Sixteen subjects engaged in an object matching task while their brain activity was assessed with functional magnetic resonance imaging (fMRI), before and after they acquired knowledge about previously unfamiliar objects. In three training sessions subjects learned about object function, actively manipulating only one set of objects (manipulation training objects, MTO), and visually exploring a second set (visual training objects, VTO). A third object set served as control condition and was not part of the training (no training objects, NTO). While training-related activation increases were observed in the fronto-parietal cortex for both VTO and MTO, post training activity in the left inferior/middle frontal gyrus and the left posterior inferior parietal lobule was higher for MTO than VTO and NTO. As revealed by Dynamic Causal Modeling of effective connectivity between the regions with enhanced post training activity, these effects were likely caused, respectively, by a down-regulation of a fronto-parietal tool use network in response to VTO, and by an increased connectivity for MTO. This pattern of findings indicates that the modalities involved in sensorimotor experience influence the formation of neural representations of objects in semantic memory, with manipulation experience specifically yielding higher activity in regions of the fronto-parietal cortex. PMID:22608404

  18. Constructing Mental Representations of Complex Three-Dimensional Objects.

    ERIC Educational Resources Information Center

    Aust, Ronald

    This exploratory study investigated whether there are differences between males and females in the strategies used to construct mental representations from three-dimensional objects in a dimensional travel display. A Silicon Graphics IRIS computer was used to create the travel displays and mathematical models were created for each of the objects…

  19. Aversive learning modulates cortical representations of object categories.

    PubMed

    Dunsmoor, Joseph E; Kragel, Philip A; Martin, Alex; LaBar, Kevin S

    2014-11-01

    Experimental studies of conditioned learning reveal activity changes in the amygdala and unimodal sensory cortex underlying fear acquisition to simple stimuli. However, real-world fears typically involve complex stimuli represented at the category level. A consequence of category-level representations of threat is that aversive experiences with particular category members may lead one to infer that related exemplars likewise pose a threat, despite variations in physical form. Here, we examined the effect of category-level representations of threat on human brain activation using 2 superordinate categories (animals and tools) as conditioned stimuli. Hemodynamic activity in the amygdala and category-selective cortex was modulated by the reinforcement contingency, leading to widespread fear of different exemplars from the reinforced category. Multivariate representational similarity analyses revealed that activity patterns in the amygdala and object-selective cortex were more similar among exemplars from the threat versus safe category. Learning to fear animate objects was additionally characterized by enhanced functional coupling between the amygdala and fusiform gyrus. Finally, hippocampal activity co-varied with object typicality and amygdala activation early during training. These findings provide novel evidence that aversive learning can modulate category-level representations of object concepts, thereby enabling individuals to express fear to a range of related stimuli. PMID:23709642

  20. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in the last years. Up to now, there are several well-established methods which already yield impressive results. However, even under quite common conditions like object movement or a complex shaping, most methods become unsatisfying. Thus, the 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation" which enables a motion-robust 3D acquisition of complex-shaped object surfaces by a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor-zoo for different object sizes is presented. Concluding, an overview of current and future fields of investigation is given.

  1. Evaluation of iterative sparse object reconstruction from few projections for 3-D rotational coronary angiography.

    PubMed

    Hansis, Eberhard; Schäfer, Dirk; Dössel, Olaf; Grass, Michael

    2008-11-01

    A 3-D reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular disease, compared to 2-D X-ray angiograms. Besides improved roadmapping, quantitative vessel analysis is possible. Due to the heart's motion, rotational coronary angiography typically provides only 5-10 projections for the reconstruction of each cardiac phase, which leads to a strongly undersampled reconstruction problem. Such an ill-posed problem can be approached with regularized iterative methods. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, the minimization of the L1 norm of the reconstructed image, favoring spatially sparse images, is a suitable regularization. Additional problems are overlaid background structures and projection truncation, which can be alleviated by background reduction using a morphological top-hat filter. This paper quantitatively evaluates image reconstruction based on these ideas on software phantom data, in terms of reconstructed absorption coefficients and vessel radii. Results for different algorithms and different input data sets are compared. First results for electrocardiogram-gated reconstruction from clinical catheter-based rotational X-ray coronary angiography are presented. Excellent 3-D image quality can be achieved. PMID:18955171
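
    A minimal sketch of L1-regularised reconstruction of a sparse object, using the iterative soft-thresholding algorithm (ISTA) on a small random system matrix as a stand-in for the projection geometry. The paper's specific regularised method, the ECG gating and the top-hat background reduction are not reproduced.

    ```python
    import numpy as np

    # Hedged sketch: L1-regularised reconstruction of a sparse object from few
    # measurements via iterative soft thresholding (ISTA). A is a toy system
    # matrix, not a tomographic projection operator.

    def ista(A, b, lam, n_iter=500):
        """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 by soft-thresholded gradient steps."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    # Toy example: 40 measurements of a 100-voxel object with 5 non-zero voxels.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.uniform(1, 2, 5)
    b = A @ x_true
    x_rec = ista(A, b, lam=0.05)
    print(np.count_nonzero(np.abs(x_rec) > 0.1), "voxels above threshold")
    ```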

  2. Eccentricity in Images of Circular and Spherical Targets and its Impact to 3D Object Reconstruction

    NASA Astrophysics Data System (ADS)

    Luhmann, T.

    2014-06-01

    This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed, and must be compensated for high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014) with new experimental investigations on the effect of length measurement errors.
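
    The eccentricity effect can be reproduced numerically: project a tilted 3D circle through an ideal pinhole camera, fit a conic to the projected contour, and compare the fitted ellipse centre with the projection of the true circle centre. The sketch below uses arbitrary target and camera geometry and is only a check of the geometric effect, not the paper's simulation setup.

    ```python
    import numpy as np

    # Hedged numerical check of the eccentricity effect: project a tilted 3D
    # circle through an ideal pinhole camera, fit a conic to the projected
    # contour, and compare the fitted ellipse centre with the projection of
    # the true circle centre. Geometry below is arbitrary.

    def project(points, focal=1.0):
        """Ideal pinhole projection onto the plane z = focal."""
        return focal * points[:, :2] / points[:, 2:3]

    def ellipse_center(xy):
        """Centre of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 fitted to points."""
        x, y = xy[:, 0], xy[:, 1]
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(D)
        a, b, c, d, e, _ = vt[-1]                 # null-space vector = conic coefficients
        return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

    # Tilted circle of radius 10 mm, centre 200 mm in front of the camera.
    t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    tilt = np.radians(45.0)
    circle = np.column_stack([10 * np.cos(t),
                              10 * np.sin(t) * np.cos(tilt),
                              200 + 10 * np.sin(t) * np.sin(tilt)])
    center_3d = np.array([[0.0, 0.0, 200.0]])

    ellipse_c = ellipse_center(project(circle))
    true_c = project(center_3d)[0]
    print("eccentricity in image units:", np.linalg.norm(ellipse_c - true_c))
    ```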

  3. Influence of georeference for saturated excess overland flow modelling using 3D volumetric soft geo-objects

    NASA Astrophysics Data System (ADS)

    Izham, Mohamad Yusoff; Muhamad Uznir, Ujang; Alias, Abdul Rahman; Ayob, Katimon; Wan Ruslan, Ismail

    2011-04-01

    Existing 2D data structures are often insufficient for analysing the dynamism of saturation excess overland flow (SEOF) within a basin. Moreover, all stream networks and soil surface structures in GIS must be preserved within appropriate projection plane fitting techniques known as georeferencing. Incorporating the 3D volumetric structure of the current soft geo-objects simulation model is a substantial step towards representing the 3D soft geo-objects of SEOF dynamically within a basin, by visualising saturated flow and overland flow volume. This research attempts to visualise the influence of a georeference system on the dynamism of overland flow coverage and the total overland flow volume generated from the SEOF process using the volumetric soft geo-object (VSG) data structure. The data structure is driven by the Green-Ampt method and the Topographic Wetness Index (TWI). VSGs are analysed by focusing on spatial object preservation techniques of the conformal-based Malaysian Rectified Skew Orthomorphic (MRSO) and the equidistant-based Cassini-Soldner projection planes under the existing geodetic Malaysian Revised Triangulation 1948 (MRT48) datum and the newly implemented Geocentric Datum for Malaysia (GDM2000). The simulated result visualises the deformation of SEOF coverage under different georeference systems via their projection planes, which delineate dissimilar computations of SEOF areas and overland flow volumes. The integration of georeferencing, 3D GIS and the saturation excess mechanism provides unifying evidence towards successful landslide and flood disaster management by envisioning the streamflow generating process (mainly SEOF) in a 3D environment.
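
    The Topographic Wetness Index that drives the simulation has the standard form TWI = ln(a / tan β), with a the specific catchment area and β the local slope. A minimal sketch of this computation on a toy grid follows; the flow-accumulation counts, slopes, and cell size are hypothetical, and the Green-Ampt infiltration part of the model is not shown.

```python
import numpy as np

def topographic_wetness_index(flow_acc, slope_rad, cell_size=30.0):
    """TWI = ln(a / tan(beta)), a = upslope contributing area per unit contour width."""
    a = (flow_acc * cell_size**2) / cell_size         # contributing area / contour width
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)    # avoid division by zero on flat cells
    return np.log(a / tan_beta)

# Toy inputs: hypothetical flow-accumulation counts and slopes (degrees) on a 3x3 grid.
flow_acc = np.array([[1, 2, 1], [2, 6, 2], [3, 9, 3]], dtype=float)
slope = np.deg2rad(np.array([[8, 5, 8], [6, 3, 6], [4, 2, 4]], dtype=float))
print(np.round(topographic_wetness_index(flow_acc, slope), 2))
```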

  4. 3D phase micro-object studies by means of digital holographic tomography supported by algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Bilski, B. J.; Jozwicka, A.; Kujawinska, M.

    2007-09-01

    Constant development of microelement technology requires the creation of new instruments to determine their basic physical parameters in 3D. The most efficient non-destructive method providing 3D information is tomography. In this paper we present Digital Holographic Tomography (DHT), in which input data are provided by means of Digital Holography (DH). The main advantage of DH is the capability to capture several projections with a single hologram [1]. However, these projections have an uneven angular distribution and their number is significantly limited. Therefore, the Algebraic Reconstruction Technique (ART), in which a few phase projections may be sufficient for a proper 3D phase reconstruction, is implemented. The error analysis of the method and its additional limitations due to the shape and dimensions of the investigated object are presented. Finally, the results of applying ART to the DHT method are presented on data reconstructed from a numerically generated hologram of a multimode fibre.
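
    ART, as referenced above, can be sketched as a Kaczmarz-type row-action update that cycles over the available projections. The toy system matrix, phase values, and relaxation factor below are illustrative only and do not reproduce the authors' DHT geometry.

```python
import numpy as np

def art_reconstruct(A, p, n_sweeps=20, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz): cycle over projection rows."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, p_i in zip(A, p):
            denom = a_i @ a_i
            if denom > 0:
                x += relax * (p_i - a_i @ x) / denom * a_i   # project onto the row's hyperplane
    return x

# Toy example: 4 "projections" of a 6-pixel phase object.
rng = np.random.default_rng(1)
A = rng.random((4, 6))                    # system matrix (ray weights), illustrative only
x_true = np.array([0.0, 0.2, 0.8, 0.8, 0.2, 0.0])
p = A @ x_true                            # simulated measured projections
print(np.round(art_reconstruct(A, p), 3))
```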

  5. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]). PMID:26440264

  6. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch with reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually non-referenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used
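
    The control-point-based co-registration step described above can be sketched as a least-squares fit of a 2-D affine transform between frames. The control-point coordinates below are hypothetical, and the actual WallView workflow may use a different transformation model.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src control points onto dst."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 2] = 1.0      # rows for x' = a*x + b*y + c
    M[1::2, 3:5] = src; M[1::2, 5] = 1.0      # rows for y' = d*x + e*y + f
    params, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)

# Hypothetical control points picked in two sequential umbilical-camera frames (pixels).
src = np.array([[120.0, 80.0], [340.0, 95.0], [300.0, 260.0], [150.0, 250.0]])
dst = np.array([[128.0, 70.0], [351.0, 88.0], [309.0, 255.0], [157.0, 242.0]])
T = fit_affine(src, dst)
warped = src @ T[:, :2].T + T[:, 2]
print("max residual (pixels):", np.abs(warped - dst).max())
```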

  7. A stroboscopic structured illumination system used in dynamic 3D visualization of high-speed motion object

    NASA Astrophysics Data System (ADS)

    Su, Xianyu; Zhang, Qican; Li, Yong; Xiang, Liqun; Cao, Yiping; Chen, Wenjing

    2005-04-01

    A stroboscopic structured illumination system, which can be used to measure the 3D shape and deformation of high-speed moving objects, is proposed and verified by experiments. The system presented in this paper can automatically detect the position of a high-speed moving object and synchronously control the flash of an LED, which projects a structured optical field onto the surface of the moving object, and the exposure of the imaging system, which acquires an image of the deformed fringe pattern; it can also create a signal, set through software, to synchronously control the LED and the imaging system. We experimented on an ordinary electric fan, successfully acquired a series of instantaneous, sharp and clear images of the rotating blades, and reconstructed their 3D shapes at different revolutions.

  8. Teaching object concepts for XML-based representations.

    SciTech Connect

    Kelsey, R. L.

    2002-01-01

    Students learned about object-oriented design concepts and knowledge representation through the use of a set of toy blocks. The blocks represented a limited and focused domain of knowledge and one that was physical and tangible. The blocks helped the students to better visualize, communicate, and understand the domain of knowledge as well as how to perform object decomposition. The blocks were further abstracted to an engineering design kit for water park design. This helped the students to work on techniques for abstraction and conceptualization. It also led the project from tangible exercises into software and programming exercises. Students employed XML to create object-based knowledge representations and Java to use the represented knowledge. The students developed and implemented software allowing a lay user to design and create their own water slide and then to take a simulated ride on their slide.

  9. VIRO 3D: fast three-dimensional full-body scanning for humans and other living objects

    NASA Astrophysics Data System (ADS)

    Stein, Norbert; Minge, Bernhard

    1998-03-01

    The development of a family of partial and whole body scanners provides a complete technology for fully three-dimensional and contact-free scans of human bodies or other living objects within seconds. This paper gives insight into the design and the functional principles of the whole body scanner VIRO 3D, which operates on the basis of the laser split-beam method. The arrangement of up to 24 camera/laser combinations, dividing the area into different camera fields, and an all-around sensor configuration travelling in the vertical direction allow a complete 360-degree scan of an object within 6-20 seconds. Due to a special calibration process the different sensors are matched and the measured data are combined. Up to 10 million 3D measuring points with a resolution of approximately 1 mm in all coordinate axes are processed to generate a 3D model. By means of high-performance processors in combination with real-time image processing chips, the image data from almost any number of sensors can be recorded and evaluated synchronously in video real time. VIRO 3D scanning systems have already been successfully implemented in various applications and will open up new perspectives in other fields, ranging from industry, orthopaedic medicine and plastic surgery to art and photography.

  10. Accessing embodied object representations from vision: A review.

    PubMed

    Matheson, Heath; White, Nicole; McMullen, Patricia

    2015-05-01

    Theories of embodied cognition (EC) propose that object concepts are represented by reactivations of sensorimotor experiences of different objects. Abundant research from linguistic paradigms provides support for the notion that sensorimotor simulations are involved in cognitive tasks like comprehension. However, it is unclear whether object concepts, as accessed from the visual presentation of objects, are embodied. In the present article we review a large body of visual cognitive research that addresses 5 main predictions of the theory of EC. First, EC accounts predict that visual presentation of manipulable objects, but not nonmanipulable objects, should activate motor representations. Second, EC predicts that sensorimotor activity is necessary to perform visual-cognitive tasks such as object naming. Third, EC posits the existence of distinct neural ensembles that integrate information from action and vision. Fourth, EC predicts that relationships between visual and motor activity change throughout development. Fifth, EC predicts that the visual presentation of objects or actions should prime performance cross-modally. We summarize findings from neuroimaging, neuropsychology, neurophysiology, development, and behavioral paradigms. We show that while much of the research published so far demonstrates that there is a relationship between visual and motoric representations, there is no evidence supporting a strong form of EC. We conclude that sensorimotor simulations may not be required to perform visual cognitive tasks and highlight a number of directions for future research that could provide strong support for EC in visual cognitive paradigms. PMID:25314679

  11. Using Morphlet-Based Image Representation for Object Detection

    NASA Astrophysics Data System (ADS)

    Gorbatsevich, V. S.; Vizilter, Yu. V.

    2016-06-01

    In this paper, we propose an original method for object detection based on a special tree-structured image representation - the trees of morphlets. The method provides robust detection of various types of objects in an image without employing a machine learning procedure. Along with bounding-box creation at the detection step, the method performs a pre-segmentation, which can further be used for recognition purposes. Another important feature of the proposed approach is that there is no need to use a sliding window or a feature pyramid in order to detect objects of different sizes.

  12. The spatiotopic representation of visual objects across time.

    PubMed

    Collins, Thérèse

    2016-08-01

    Each eye movement introduces changes in the retinal location of objects. How a stable spatiotopic representation emerges from such variable input is an important question for the study of vision. Researchers have classically probed human observers' performance in a task requiring a location judgment about an object presented at different locations across a saccade. Correct performance on this task requires realigning or remapping retinal locations to compensate for the saccade. A recent study showed that performance improved with longer presaccadic viewing time, suggesting that accurate spatiotopic representations take time to build up. The first goal of the study was to replicate that finding. Two experiments, one an exact replication and the second a modified version, failed to replicate improved performance with longer presaccadic viewing time. The second goal of this study was to examine the role of attention in constructing spatiotopic representations, as theoretical and neurophysiological accounts of remapping have proposed that only attended targets are remapped. A third experiment thus manipulated attention with a spatial cueing paradigm and compared transsaccadic location performance of attended versus unattended targets. No difference in spatiotopic performance was found between attended and unattended targets. Although only negative results are reported, they might nevertheless suggest that spatiotopic representations are relatively stable over time. PMID:27349426

  13. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  14. 3D scene's object detection and recognition using depth layers and SIFT-based machine learning

    NASA Astrophysics Data System (ADS)

    Kounalakis, T.; Triantafyllidis, G. A.

    2011-09-01

    This paper presents a novel system that fuses efficient and state-of-the-art techniques of stereo vision and machine learning, aiming at object detection and recognition. To this end, the system initially creates depth maps by employing the graph-cut technique. Then, the depth information is used for object detection by separating the objects from the whole scene. Next, the Scale-Invariant Feature Transform (SIFT) is used, providing the system with each object's unique feature key-points, which are employed in training an Artificial Neural Network (ANN). The system is then able to classify and recognize the nature of these objects, creating knowledge from the real world.
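
    A hedged sketch of the SIFT-plus-ANN recognition stage is given below, assuming OpenCV for SIFT and scikit-learn for the network. Averaging the descriptors of an object patch into a fixed-length feature is a simplification of ours, not necessarily the authors' encoding, and the depth-based segmentation that supplies the object patches is not shown.

```python
import cv2                      # requires opencv-python >= 4.4 for SIFT
import numpy as np
from sklearn.neural_network import MLPClassifier

sift = cv2.SIFT_create()

def object_feature(gray_patch):
    """Crude fixed-length feature: mean of the SIFT descriptors found in an object patch."""
    _, desc = sift.detectAndCompute(gray_patch, None)   # gray_patch: uint8 grayscale image
    if desc is None:                                    # no keypoints found in the patch
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)

def train_recognizer(train_patches, train_labels):
    """train_patches/train_labels would come from depth-segmented object regions (not shown)."""
    X = np.stack([object_feature(p) for p in train_patches])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    return clf.fit(X, train_labels)
```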

  15. Controlled Experimental Study Depicting Moving Objects in View-Shared Time-Resolved 3D MRA

    PubMed Central

    Mostardi, Petrice M.; Haider, Clifton R.; Rossman, Phillip J.; Borisch, Eric A.; Riederer, Stephen J.

    2010-01-01

    Various methods have been used for time-resolved contrast-enhanced MRA (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of 3D time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested, which use view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  16. Controlled experimental study depicting moving objects in view-shared time-resolved 3D MRA.

    PubMed

    Mostardi, Petrice M; Haider, Clifton R; Rossman, Phillip J; Borisch, Eric A; Riederer, Stephen J

    2009-07-01

    Various methods have been used for time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA), many involving view sharing. However, the extent to which the resultant image time series represents the actual dynamic behavior of the contrast bolus is not always clear. Although numerical simulations can be used to estimate performance, an experimental study can allow more realistic characterization. The purpose of this work was to use a computer-controlled motion phantom for study of the temporal fidelity of three-dimensional (3D) time-resolved sequences in depicting a contrast bolus. It is hypothesized that the view order of the acquisition and the selection of views in the reconstruction can affect the positional accuracy and sharpness of the leading edge of the bolus and artifactual signal preceding the edge. Phantom studies were performed using dilute gadolinium-filled vials that were moved along tabletop tracks by a computer-controlled motor. Several view orders were tested using view-sharing and Cartesian sampling. Compactness of measuring the k-space center, consistency of view ordering within each reconstruction frame, and sampling the k-space center near the end of the temporal footprint were shown to be important in accurate portrayal of the leading edge of the bolus. A number of findings were confirmed in an in vivo CE-MRA study. PMID:19319897

  17. Representation of protein 3D structures in spherical (ρ, ϕ, θ) coordinates and two of its potential applications.

    PubMed

    Reyes, Vicente M

    2011-09-01

    Three-dimensional objects can be represented using cartesian, spherical or cylindrical coordinate systems, among many others. Currently all protein 3D structures in the PDB are in cartesian coordinates. We wanted to explore the possibility that protein 3D structures, especially the globular type (spheroproteins), when represented in spherical coordinates might find useful novel applications. A Fortran program was written to transform protein 3D structure files in cartesian coordinates (x,y,z) to spherical coordinates (ρ, ϕ, θ), with the centroid of the protein molecule as origin. We present here two applications, namely, (1) separation of the protein outer layer (OL) from the inner core (IC); and (2) identifying protrusions and invaginations on the protein surface. In the first application, ϕ and θ were partitioned into suitable intervals and the point with maximum ρ in each such 'ϕ-θ bin' was determined. A suitable cutoff value for ρ is adopted, and for each ϕ-θ bin, all points with ρ values less than the cutoff are considered part of the IC, and those with ρ values equal to or greater than the cutoff are considered part of the OL. We show that this separation procedure is successful as it gives rise to an OL that is significantly more enriched in hydrophilic amino acid residues, and an IC that is significantly more enriched in hydrophobic amino acid residues, as expected. In the second application, the points with maximum ρ in each ϕ-θ bin are sequestered and their frequency distribution constructed (i.e., maximum ρ's sorted from lowest to highest, collected into 1.50 Å intervals, and the frequency in each interval plotted). We show in such plots that invaginations on the protein surface give rise to subpeaks or shoulders on the lagging side of the main peak, while protrusions give rise to similar subpeaks or shoulders, but on the leading side of the main peak. We used the dataset of Laskowski et al. (1996) to demonstrate both applications. PMID
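
    A minimal sketch of the coordinate transformation and the outer-layer/inner-core split follows. The bin counts and the cutoff rule (a fixed fraction of the maximum ρ per ϕ-θ bin) are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def to_spherical(xyz):
    """Convert centred Cartesian coordinates to (rho, phi, theta)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    rho = np.sqrt(x**2 + y**2 + z**2)
    phi = np.arctan2(y, x)                                              # azimuth, -pi..pi
    theta = np.arccos(np.clip(z / np.maximum(rho, 1e-12), -1.0, 1.0))   # polar angle, 0..pi
    return rho, phi, theta

def split_outer_inner(xyz, n_phi=36, n_theta=18, cutoff_frac=0.75):
    """Flag each atom as outer layer (True) or inner core (False), per phi-theta bin."""
    xyz = xyz - xyz.mean(axis=0)                        # centroid of the molecule as origin
    rho, phi, theta = to_spherical(xyz)
    i = np.clip(np.digitize(phi, np.linspace(-np.pi, np.pi, n_phi + 1)) - 1, 0, n_phi - 1)
    j = np.clip(np.digitize(theta, np.linspace(0.0, np.pi, n_theta + 1)) - 1, 0, n_theta - 1)
    outer = np.zeros(len(xyz), dtype=bool)
    for bi in range(n_phi):
        for bj in range(n_theta):
            mask = (i == bi) & (j == bj)
            if mask.any():
                cutoff = cutoff_frac * rho[mask].max()   # illustrative cutoff rule
                outer[mask] = rho[mask] >= cutoff
    return outer
```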

  18. The effects of surface gloss and roughness on color constancy for real 3-D objects.

    PubMed

    Granzier, Jeroen J M; Vergne, Romain; Gegenfurtner, Karl R

    2014-01-01

    Color constancy denotes the phenomenon that the appearance of an object remains fairly stable under changes in illumination and background color. Most of what we know about color constancy comes from experiments using flat, matte surfaces placed on a single plane under diffuse illumination simulated on a computer monitor. Here we investigate whether material properties (glossiness and roughness) have an effect on color constancy for real objects. Subjects matched the color and brightness of cylinders (painted red, green, or blue) illuminated by simulated daylight (D65) or by a reddish light with a Munsell color book illuminated by a tungsten lamp. The cylinders were either glossy or matte and either smooth or rough. The object was placed in front of a black background or a colored checkerboard. We found that color constancy was significantly higher for the glossy objects compared to the matte objects, and higher for the smooth objects compared to the rough objects. This was independent of the background. We conclude that material properties like glossiness and roughness can have significant effects on color constancy. PMID:24563527

  19. Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Koch, M. B.; Beck, F. B.; Cockrell, C. R.

    1992-01-01

    Electromagnetic scattering analysis of objects at resonance is difficult because low frequency techniques are slow and computer intensive, and high frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three dimensional objects as a combination of flat plates where some of the plates are blocking the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.

  20. Neural representations of novel objects associated with olfactory experience.

    PubMed

    Ghio, Marta; Schulze, Patrick; Suchan, Boris; Bellebaum, Christian

    2016-07-15

    Object conceptual knowledge comprises information related to several motor and sensory modalities (e.g. for tools, what they look like and how to manipulate them). Whether and to what extent conceptual object knowledge is represented in the same sensory and motor systems recruited during object-specific learning experience is still a controversial question. A direct approach to assessing the experience-dependence of conceptual object representations is based on training with novel objects. The present study extended previous research, which focused mainly on the role of manipulation experience for tool-like stimuli, by considering sensory experience only. Specifically, we examined the impact of experience in the non-dominant olfactory modality on the neural representation of novel objects. Sixteen healthy participants visually explored a set of novel objects during the training phase while for each object an odor (e.g., peppermint) was presented (olfactory-visual training). As control conditions, a second set of objects was only visually explored (visual-only training), and a third set was not part of the training. In a post-training fMRI session, participants performed an old/new task with pictures of objects associated with olfactory-visual and visual-only training (old) and no-training objects (new). Although we did not find any evidence of activations in primary olfactory areas, the processing of olfactory-visual versus visual-only training objects elicited greater activation in the right anterior hippocampus, a region included in the extended olfactory network. This finding is discussed in terms of different functional roles of the hippocampus in olfactory processes. PMID:27083305

  1. Human Object-Similarity Judgments Reflect and Transcend the Primate-IT Object Representation

    PubMed Central

    Mur, Marieke; Meys, Mirjam; Bodurka, Jerzy; Goebel, Rainer; Bandettini, Peter A.; Kriegeskorte, Nikolaus

    2013-01-01

    Primate inferior temporal (IT) cortex is thought to contain a high-level representation of objects at the interface between vision and semantics. This suggests that the perceived similarity of real-world objects might be predicted from the IT representation. Here we show that objects that elicit similar activity patterns in human IT (hIT) tend to be judged as similar by humans. The IT representation explained the human judgments better than early visual cortex, other ventral-stream regions, and a range of computational models. Human similarity judgments exhibited category clusters that reflected several categorical divisions that are prevalent in the IT representation of both human and monkey, including the animate/inanimate and the face/body division. Human judgments also reflected the within-category representation of IT. However, the judgments transcended the IT representation in that they introduced additional categorical divisions. In particular, human judgments emphasized human-related additional divisions between human and non-human animals and between man-made and natural objects. hIT was more similar to monkey IT than to human judgments. One interpretation is that IT has evolved visual-feature detectors that distinguish between animates and inanimates and between faces and bodies because these divisions are fundamental to survival and reproduction for all primate species, and that other brain systems serve to more flexibly introduce species-dependent and evolutionarily more recent divisions. PMID:23525516

  2. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    NASA Astrophysics Data System (ADS)

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Curo

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR which became very popular during the last decade. Natural and man-made objects of cities such as trees and buildings are complex structures, and automatic recognition and reconstruction of these objects from digital aerial images and other data sources is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' by using aerial colour images of an urban area of the town of Engen in Germany.

  3. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  4. Reconstructing representations of dynamic visual objects in early visual cortex.

    PubMed

    Chong, Edmund; Familiar, Ariana M; Shim, Won Mok

    2016-02-01

    As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the "intermediate" orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations. PMID:26712004

  5. Reconstructing representations of dynamic visual objects in early visual cortex

    PubMed Central

    Chong, Edmund; Familiar, Ariana M.; Shim, Won Mok

    2016-01-01

    As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations. PMID:26712004

  6. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods only encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness.

  7. Characteristics of Haptic Peripersonal Spatial Representation of Object Relations.

    PubMed

    Wako, Ryo; Ayabe-Kanamura, Saho

    2016-01-01

    Haptic perception of space is known to show characteristics that are different to actual space. The current study extends on this line of research, investigating whether systematic deviations are also observed in the formation of haptic spatial representations of object-to-object relations. We conducted a haptic spatial reproduction task analogous to the parallelity task with spatial layouts. Three magnets were positioned to form corners of an isosceles triangle and the task of the participant was to reproduce the right angle corner. We observed systematic deviations in the reproduction of the right angle triangle. The systematic deviations were not observed when the task was conducted on the mid-sagittal plane. Furthermore, the magnitude of the deviation was decreased when non-informative vision was introduced. These results suggest that there is a deformation in spatial representation of object-to-object relations formed using haptics. However, as no systematic deviation was observed when the task was conducted on the mid-sagittal plane, we suggest that the perception of object-to-object relations uses a different egocentric reference frame to the perception of orientation. PMID:27462990

  8. Characteristics of Haptic Peripersonal Spatial Representation of Object Relations

    PubMed Central

    2016-01-01

    Haptic perception of space is known to show characteristics that are different to actual space. The current study extends on this line of research, investigating whether systematic deviations are also observed in the formation of haptic spatial representations of object-to-object relations. We conducted a haptic spatial reproduction task analogous to the parallelity task with spatial layouts. Three magnets were positioned to form corners of an isosceles triangle and the task of the participant was to reproduce the right angle corner. We observed systematic deviations in the reproduction of the right angle triangle. The systematic deviations were not observed when the task was conducted on the mid-sagittal plane. Furthermore, the magnitude of the deviation was decreased when non-informative vision was introduced. These results suggest that there is a deformation in spatial representation of object-to-object relations formed using haptics. However, as no systematic deviation was observed when the task was conducted on the mid-sagittal plane, we suggest that the perception of object-to-object relations uses a different egocentric reference frame to the perception of orientation. PMID:27462990

  9. Evaluating the Effectiveness of Organic Chemistry Textbooks in Promoting Representational Fluency and Understanding of 2D-3D Diagrammatic Relationships

    ERIC Educational Resources Information Center

    Kumi, Bryna C.; Olimpo, Jeffrey T.; Bartlett, Felicia; Dixon, Bonnie L.

    2013-01-01

    The use of two-dimensional (2D) representations to communicate and reason about micromolecular phenomena is common practice in chemistry. While experts are adept at using such representations, research suggests that novices often exhibit great difficulty in understanding, manipulating, and translating between various representational forms. When…

  10. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, and aerosols, imaging through walls as in hostage situations, and also in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from distances allowing standoff detection of suspicious objects and humans from large distances.
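
    The FMCW ranging principle behind the 3-D (depth) capability mentioned above reduces to the standard beat-frequency relation R = c·T·f_b / (2B), with B the sweep bandwidth and T the sweep time. The sweep parameters in the sketch below are illustrative numbers, not the parameters of the described system.

```python
# A target at range R produces a beat frequency f_b = 2*B*R / (c*T) in a linear FMCW sweep,
# so the range is recovered as R = c*T*f_b / (2*B).
C = 3.0e8          # speed of light, m/s

def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
    """Target range from the measured beat frequency of a linear FMCW sweep."""
    return C * sweep_time_s * beat_freq_hz / (2.0 * sweep_bandwidth_hz)

# Illustrative numbers only: a 10 GHz sweep over 1 ms and a 400 kHz beat give a 6 m standoff.
print(fmcw_range(4.0e5, 1.0e10, 1.0e-3))
```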

  11. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized RCS with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross-sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  12. Data amalgamation in the digitalization of 3D objects all over its 360 degrees

    NASA Astrophysics Data System (ADS)

    Rayas, Juan A.; Rodriguez-Vera, Ramon; Martinez, Amalia

    2005-02-01

    A technique is described in which different views of an object are combined to recover its three-dimensional shape over a full 360° field of view. The object is placed on a motorized rotary platform and a linear fringe pattern is projected onto it. At each angular displacement of the object, the projected fringe pattern is captured by a CCD camera. Each pattern is digitally demodulated, providing depth information. The digital matrix format, that is, the image type, is then converted into triads (x, y, z). In this way, a cloud of points independent of their position in the matrix is constructed. One point in each cloud, known a priori, is taken as a reference. All the clouds are rotated and displaced until each reference point takes its corresponding position. The different merged point clouds (views) are arranged into a single triad matrix that describes the complete surface of the target object. Finally, a mesh of quadrilaterals is built that makes it possible to generate a solid surface.
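
    A minimal sketch of the view-merging step follows, assuming the turntable axis is the y axis and that one known reference point per cloud is available, as the abstract describes. The axis convention and the way the reference points are used are assumptions of ours, not the authors' procedure.

```python
import numpy as np

def rotate_about_y(points, angle_rad):
    """Rotate a point cloud about the turntable axis (taken here as y)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return points @ R.T

def merge_views(views, angles_deg, ref_points):
    """Rotate each (x, y, z) view by its turntable angle, shift it so its known reference
    point coincides with the first view's reference point, and stack the result."""
    target = None
    merged = []
    for cloud, angle, ref in zip(views, angles_deg, ref_points):
        a = np.deg2rad(angle)
        cloud_r = rotate_about_y(cloud, a)
        ref_r = rotate_about_y(ref[None, :], a)[0]
        if target is None:
            target = ref_r                       # first view defines the common frame
        merged.append(cloud_r + (target - ref_r))
    return np.vstack(merged)
```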

  13. Representation of haptic objects during mental rotation in congenital blindness.

    PubMed

    Güçlü, Burak; Celik, Serkan; Ilci, Civan

    2014-04-01

    The representation of haptic objects by three groups of participants (sighted, blindfolded, and congenitally blind) was studied in a mental-rotation task. Three models were tested. The participants explored a standard object continuously with the left hand and tried to find the mirror object among two alternatives explored sequentially with the right hand. Sighted participants were tested in the visual version of the task. The accuracy of judgments was very high (> 95%) for all groups, and the blind group had the highest identification times. Correlation analyses were performed between (both single-trial and average) identification times and angular differences. The identification times of the sighted and blindfolded groups increased as linear functions of the angular difference between the mirror and the standard stimuli, supporting the classical model. The identification times of the blind group changed non-monotonically and were consistent with an antiparallel image (180 degrees rotation superimposed) in the mental representation. The dual code model did not fit the data well for any participant group. The performance differences between the blindfolded and blind groups may be attributed to a modified mapping function from the object-properties-processing sub-system to the visual buffer, which was conjectured to be available also to the blind group while processing haptic objects. PMID:24897889

  14. A roadmap to global illumination in 3D scenes: solutions for GPU object recognition applications

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Victor H.; Tapia, Juan J.

    2014-09-01

    Light interaction with matter is of remarkable complexity. Adequate modeling of global illumination has been a vastly studied topic since the beginning of computer graphics, and it is still an unsolved problem. The rendering equation for global illumination is based on the refraction and reflection of light in interaction with matter within an environment. This physical process possesses a high computational complexity when implemented on a digital computer. The appearance of an object depends on light interactions with the surface of the material, such as emission, scattering, and absorption. Several image-synthesis methods have been used to realistically render the appearance of light incident on an object. Recent global illumination algorithms employ mathematical models and computational strategies that improve the efficiency of the simulation. This work presents a review of the state of the art of global illumination algorithms and focuses on the efficiency of the solution in a computational implementation on a graphics processing unit. A reliable system is developed to simulate realistic scenes in the context of real-time object recognition under different lighting conditions. Computer simulation results are presented and discussed in terms of discrimination capability and robustness to additive noise, when considering several lighting model reflections and multiple light sources.
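
    For reference, the rendering equation the abstract alludes to is usually written in its standard (Kajiya) form, with outgoing radiance expressed as emitted radiance plus incident radiance reflected over the hemisphere:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

    Global-illumination algorithms differ mainly in how this integral is estimated (for example, by Monte Carlo path tracing) and in how the recursion over successive surface interactions is truncated.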

  15. Insertion of 3-D-primitives in mesh-based representations: towards compact models preserving the details.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu

    2010-07-01

    We propose an original hybrid modeling process of urban scenes that represents 3-D models as a combination of mesh-based surfaces and geometric 3-D primitives. Meshes describe details such as ornaments and statues, whereas 3-D primitives code for regular shapes such as walls and columns. Starting from a 3-D surface obtained by multiview stereo techniques, these primitives are inserted into the surface after being detected. This strategy allows the introduction of semantic knowledge, the simplification of the modeling, and even the correction of errors generated by the acquisition process. We design a hierarchical approach exploring different scales of an observed scene. Each level consists first in segmenting the surface using a multilabel energy model optimized by α-expansion, and then in fitting 3-D primitives such as planes, cylinders or tori on the obtained partition where relevant. Experiments on real meshes, depth maps and synthetic surfaces show good potential for the proposed approach. PMID:20236893
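
    A hedged sketch of one primitive-fitting step, fitting a plane to the vertices of a segmented mesh region by least squares, is shown below. The RMS test is an illustrative acceptance criterion, not the paper's energy model.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of mesh vertices: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                              # direction of least variance
    return centroid, normal

def plane_rms_error(points, centroid, normal):
    """RMS distance of the vertices to the fitted plane (a simple 'is this a wall?' test)."""
    return np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
```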

  16. Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging.

    PubMed

    Wang, Yexin; Negahdaripour, Shahriar; Aykin, Murat D

    2016-08-20

    Establishing the projection model of imaging systems is critical in 3D reconstruction of object shapes from multiple 2D views. When deployed underwater, these are enclosed in waterproof housings with transparent glass ports that generate nonlinear refractions of optical rays at interfaces, leading to invalidation of the commonly assumed single-viewpoint (SVP) model. In this paper, we propose a non-SVP ray tracing model for the calibration of a projector-camera system, employed for 3D reconstruction based on the structured light paradigm. The projector utilizes dot patterns, having established that the contrast loss is less severe than for traditional stripe patterns in highly turbid waters. Experimental results are presented to assess the achieved calibrating accuracy. PMID:27556973

  17. In-hand dexterous manipulation of piecewise-smooth 3-D objects

    SciTech Connect

    Rus, D.

    1999-04-01

    The author presents an algorithm called finger tracking for in-hand manipulation of three-dimensional objects with independent robot fingers. She describes and analyzes the differential control for finger tracking and extends it to on-line continuous control for a set of cooperating robot fingers. She shows experimental data from a simulation. Finally, she discusses global control issues for finger tracking, and computes lower bounds for reorientation by finger tracking. The algorithm is computationally efficient, exact, and takes into consideration the full dynamics of the system.

  18. Representational dynamics of object recognition: Feedforward and feedback information flows.

    PubMed

    Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra

    2016-03-01

    Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas, along with feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80 ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265 ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and a later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception. PMID:26806290

  19. Typicality sharpens category representations in object-selective cortex.

    PubMed

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2016-07-01

    The purpose of categorization is to identify generalizable classes of objects whose members can be treated equivalently. Within a category, however, some exemplars are more representative of that concept than others. Despite long-standing behavioral effects, little is known about how typicality influences the neural representation of real-world objects from the same category. Using fMRI, we showed participants 64 subordinate object categories (exemplars) grouped into 8 basic categories. Typicality for each exemplar was assessed behaviorally and we used several multi-voxel pattern analyses to characterize how typicality affects the pattern of responses elicited in early visual and object-selective areas: V1, V2, V3v, hV4, LOC. We found that in LOC, but not in early areas, typical exemplars elicited activity more similar to the central category tendency and created sharper category boundaries than less typical exemplars, suggesting that typicality enhances within-category similarity and between-category dissimilarity. Additionally, we uncovered a brain region (cIPL) where category boundaries favor less typical categories. Our results suggest that typicality may constitute a previously unexplored principle of organization for intra-category neural structure and, furthermore, that this representation is not directly reflected in image features describing natural input, but rather built by the visual system at an intermediate processing stage. PMID:27079531

  20. A supervised method for object-based 3D building change detection on aerial stereo images

    NASA Astrophysics Data System (ADS)

    Qin, R.; Gruen, A.

    2014-08-01

    There is a great demand for studying the changes of buildings over time. The current trend for building change detection combines the orthophoto and the DSM (Digital Surface Model). Pixel-based change detection methods are very sensitive to the quality of the images and DSMs, while object-based methods are more robust towards these problems. In this paper, we propose a supervised method for building change detection. After a segment-based SVM (Support Vector Machine) classification with features extracted from the orthophoto and DSM, we focus on the detection of building changes between the different periods by measuring their height and texture differences, as well as their shapes. A decision tree analysis is used to assess the probability of change for each building segment, and a traffic-light system is used to indicate the status "change", "non-change" and "uncertain change" for building segments. The proposed method is applied to scanned aerial photos of the city of Zurich from 2002 and 2007, and the results demonstrate that our method is able to achieve high detection accuracy.
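
    A minimal sketch of the segment-based SVM classification stage is shown below. The per-segment features, class labels, and training values are hypothetical, and the subsequent decision-tree change analysis and traffic-light thresholds are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-segment features from the orthophoto and DSM:
# [mean R, mean G, mean B, mean height above ground (m), texture variance]
X_train = np.array([
    [0.45, 0.42, 0.40, 7.5, 0.08],   # building roof
    [0.48, 0.44, 0.41, 9.0, 0.06],   # building roof
    [0.20, 0.35, 0.15, 0.3, 0.12],   # vegetation
    [0.50, 0.50, 0.52, 0.1, 0.02],   # road
])
y_train = ["building", "building", "vegetation", "road"]

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Classify a new segment; change vs. non-change would then be assessed by comparing
# classified building segments across the two epochs (not shown here).
segment = np.array([[0.47, 0.43, 0.42, 8.1, 0.07]])
print(clf.predict(segment))
```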

  1. Calculations of Arctic ozone chemistry using objectively analyzed data in a 3-D CTM

    NASA Technical Reports Server (NTRS)

    Kaminski, J. W.; Mcconnell, J. C.; Sandilands, J. W.

    1994-01-01

    A three-dimensional chemical transport model (CTM) (Kaminski, 1992) has been used to study the evolution of the Arctic ozone during the winter of 1992. The continuity equation has been solved using a spectral method with Rhomboidal 15 (R15) truncation and leap-frog time stepping. Six-hourly meteorological fields from the Canadian Meteorological Center global objective analysis routines run at T79 were degraded to the model resolution. In addition, they were interpolated to the model time grid and were used to drive the model from the surface to 10 mb. In the model, processing of Cl(x) occurred over Arctic latitudes but some of the initial products were still present by mid-January. Also, the large amounts of ClO formed in the model in early January were converted to ClNO3. The results suggest that the model resolution may be insufficient to resolve the details of the Arctic transport during this time period. In particular, the wind field does not move the ClO(x) 'cloud' to the south over Europe as seen in the MLS measurements.

  2. Mechanisms underlying the emergence of object representations during infancy.

    PubMed

    Scott, Lisa S

    2011-10-01

    The effects of individual versus category training, using behavioral indices of stimulus discrimination and neural ERPs indices of holistic processing, were examined in infants. Following pretraining assessments at 6 months, infants were sent home with training books of objects for 3 months. One group of infants was trained with six different strollers labeled individually, and another group was trained with the same six strollers labeled at the category level (i.e., "stroller"). Infants returned for posttraining assessments at 9 months. Discrimination of objects was facilitated for infants trained with the individually labeled strollers but was unchanged after training at the category level. Relative to pretraining and to category-level training, individual-level training resulted in increased holistic processing of strollers recorded over occipital brain regions. These results suggest that labeling nonface objects individually, in infancy, facilitates discrimination and leads to the emergence of holistic neural representations not present with category-level labeling. PMID:21452953

  3. Feature diagnosticity affects representations of novel and familiar objects

    PubMed Central

    Hsu, Nina S.; Schlichting, Margaret L.; Thompson-Schill, Sharon L.

    2014-01-01

    Many features can describe a concept, but only some features define a concept in that they enable discrimination of items that are instances of a concept from (similar) items that are not. We refer to this property of some features as feature diagnosticity. Previous work has described the behavioral effects of feature diagnosticity, but there has been little work on explaining why and how these effects arise. In this study, we aimed to understand the impact of feature diagnosticity on concept representations across two complementary experiments. In Experiment 1, we manipulated the diagnosticity of one feature, color, for a set of novel objects that human subjects learned over the course of one week. We report behavioral and neural evidence that diagnostic features are likely to be automatically recruited during remembering. Specifically, individuals activated color-selective regions of ventral temporal cortex (specifically, left fusiform gyrus and left inferior temporal gyrus) when thinking about the novel objects, even though color information was never explicitly probed during the task. Moreover, multiple behavioral and neural measures of the effects of feature diagnosticity were correlated across subjects. In Experiment 2, we examined relative color association in familiar object categories, which varied in feature diagnosticity (fruits and vegetables, household items). Taken together, these results offer novel insights into the neural mechanisms underlying concept representations by demonstrating that automatic recruitment of diagnostic information gives rise to behavioral effects of feature diagnosticity. PMID:24800630

  4. An object-based methodology for knowledge representation

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.

  5. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above

    PubMed Central

    van Doorn, Andrea J.; Wagemans, Johan

    2016-01-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329

  6. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects.

    PubMed

    Ye, Zhou; Nain, Amrinder S; Behkam, Bahareh

    2016-07-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible, to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features. PMID:27283144

  7. Parallel phase-shifting digital holography and its application to high-speed 3D imaging of dynamic object

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Xia, Peng; Wang, Yexin; Matoba, Osamu

    2016-03-01

    Digital holography is a technique for 3D measurement of objects. The technique uses an image sensor to record an interference fringe image containing the complex amplitude of the object, and numerically reconstructs that complex amplitude by computer. Parallel phase-shifting digital holography is capable of accurate 3D measurement of dynamic objects, because it can reconstruct the complex amplitude of the object, with no undesired images superimposed, from a single hologram. The undesired images are the non-diffracted wave and the conjugate image associated with holography. In parallel phase-shifting digital holography, a hologram whose reference-wave phase is spatially and periodically shifted every other pixel is recorded, so that the complex amplitude of the object is obtained by single-shot exposure. The recorded hologram is decomposed into the multiple holograms required for phase-shifting digital holography, and the complex amplitude of the object, free from the undesired images, is reconstructed from them. To validate the approach, a high-speed parallel phase-shifting digital holography system was constructed, consisting of a Mach-Zehnder interferometer, a continuous-wave laser, and a high-speed polarization imaging camera. A phase motion picture of dynamic air flow sprayed from a nozzle was recorded at 180,000 frames per second (FPS), and a phase motion picture of air flow induced by a discharge between two electrodes was recorded at 1,000,000 FPS while high voltage was applied between the electrodes.
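
    The core reconstruction step can be sketched with the standard four-step phase-shifting combination; in the parallel scheme the four intensity values come from neighbouring pixels of the single recorded hologram rather than from four sequential exposures. The function below is a generic illustration, not the authors' implementation.

      import numpy as np

      def object_wave_from_phase_shifts(I0, I90, I180, I270):
          # Four-step phase shifting: with reference phases 0, pi/2, pi, 3*pi/2,
          # I_k = A + B*cos(phi - delta_k), so the object wave (up to a constant
          # reference amplitude) is recovered as U ~ (I0 - I180) + 1j*(I90 - I270).
          return (I0 - I180) + 1j * (I90 - I270)

      # U = object_wave_from_phase_shifts(I0, I90, I180, I270)
      # amplitude, phase = np.abs(U), np.angle(U)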

  8. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic, empirically based research on the importance of depiction during a 3D reconstruction process. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. That type of project is especially interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  9. Object representations at multiple scales from digital elevation models.

    PubMed

    Drăguţ, Lucian; Eisank, Clemens

    2011-06-15

    In the last decade landform classification and mapping has developed as one of the most active areas of geomorphometry. However, translation from continuous models of elevation and its derivatives (slope, aspect, and curvatures) to landform divisions (landforms and landform elements) is filtered by two important concepts: scale and object ontology. Although acknowledged as being important, these two issues have received surprisingly little attention. This contribution provides an overview and prospects of object representation from DEMs as a function of scale. Relationships between object delineation and classification or regionalization are explored, in the context of differences between general and specific geomorphometry. A review of scale issues in geomorphometry, ranging from scale effects to scale optimization techniques, is followed by an analysis of pros and cons of using cells and objects in DEM analysis. Prospects for coupling multi-scale analysis and object delineation are then discussed. Within this context, we propose discrete geomorphometry as a possible approach between general and specific geomorphometry. Discrete geomorphometry would apply to and describe land-surface divisions defined solely by the criteria of homogeneity in respect to a given land-surface parameter or a combination of several parameters. Homogeneity, in its turn, should always be relative to scale. PMID:21760655

  10. Object representations at multiple scales from digital elevation models

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2011-01-01

    In the last decade landform classification and mapping has developed as one of the most active areas of geomorphometry. However, translation from continuous models of elevation and its derivatives (slope, aspect, and curvatures) to landform divisions (landforms and landform elements) is filtered by two important concepts: scale and object ontology. Although acknowledged as being important, these two issues have received surprisingly little attention. This contribution provides an overview and prospects of object representation from DEMs as a function of scale. Relationships between object delineation and classification or regionalization are explored, in the context of differences between general and specific geomorphometry. A review of scale issues in geomorphometry—ranging from scale effects to scale optimization techniques—is followed by an analysis of pros and cons of using cells and objects in DEM analysis. Prospects for coupling multi-scale analysis and object delineation are then discussed. Within this context, we propose discrete geomorphometry as a possible approach between general and specific geomorphometry. Discrete geomorphometry would apply to and describe land-surface divisions defined solely by the criteria of homogeneity in respect to a given land-surface parameter or a combination of several parameters. Homogeneity, in its turn, should always be relative to scale. PMID:21760655

  11. Object representation and magnetic moments in thin alkali films

    NASA Astrophysics Data System (ADS)

    Garrett, Douglas C.

    2008-10-01

    This thesis is broken into two parts: a computer vision part and a solid-state physics part. In the computer vision part of the thesis (chapters 1 through 5), the concept of an architecture is discussed with a review of what is known about the brain's visual architecture as it applies to object representation. With this in mind we review the two main types of architectures that are used in computer vision for object representation. A specific object representation is then implemented and optimized to solve a problem in object tracking. This representation is then used to derive the fiducial points of a face using two distinct methods: one using evolutionary algorithms and another using a Bayesian analysis of the feature responses drawn from a gallery of faces. The evolved fiducial representation is tested as a facial detection system. It is shown that the Bayesian analysis of facial images gives an entropy measure that can be used to further improve detection results in the facial detection system. In addition, two similarity metrics are explored in the context of facial detection. It is found that a normalized vector dot product substantially outperforms the Euclidean distance measure. The solid-state part of the thesis is composed of two self-contained chapters. An effort has been made to reduce the redundancies between the material but some will necessarily remain (i.e., short descriptions of the experimental setup). Both chapters deal with the phenomenon of magnetism of atomic impurities in and on thin metal host films. The important difference between the chapters, besides the results, lies in the experimental technique used to measure the magnetism. In chapter 6, thin films of Pb are covered in situ with sub-monolayers of V, Mo and Co in the range between 0.01 and 1 monolayers. If the surface impurities are magnetic they will reduce the superconducting transition temperature of the Pb film. From the reduction of Tc the magnetic dephasing rate of the surface

  12. Multi-frequency color-marked fringe projection profilometry for fast 3D shape measurement of complex objects.

    PubMed

    Jiang, Chao; Jia, Shuhai; Dong, Jun; Bao, Qingchen; Yang, Jia; Lian, Qin; Li, Dichen

    2015-09-21

    We propose a novel multi-frequency color-marked fringe projection profilometry approach to measure the 3D shape of objects with depth discontinuities. A digital micromirror device projector is used to project a color map consisting of a series of different-frequency color-marked fringe patterns onto the target object. We use a chromaticity curve to calculate the color change caused by the height of the object. The related algorithm to measure the height is also described in this paper. To improve the measurement accuracy, a chromaticity curve correction method is presented. This correction method greatly reduces the influence of color fluctuations and measurement error on the chromaticity curve and the calculation of the object height. The simulation and experimental results validate the utility of our method. Our method avoids the conventional phase shifting and unwrapping process, as well as the independent calculation of the object height required by existing techniques. Thus, it can be used to measure complex and dynamic objects with depth discontinuities. These advantages are particularly promising for industrial applications. PMID:26406621

  13. Colorful holographic display of 3D object based on scaled diffraction by using non-uniform fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Chang, Chenliang; Xia, Jun; Lei, Wei

    2015-03-01

    We propose a new method to calculate the color computer-generated hologram of a three-dimensional object for holographic display. The three-dimensional object is composed of several planes that are tilted with respect to the hologram. The diffraction from each tilted plane to the hologram plane is calculated based on the coordinate rotation in Fourier spectrum domains. We use the nonuniform fast Fourier transform (NUFFT) to calculate the nonuniformly sampled Fourier spectrum on the tilted plane after coordinate rotation. By using the NUFFT, the diffraction calculation from a tilted plane to the hologram plane with variable sampling rates can be achieved, which overcomes the sampling restriction of the FFT in the conventional angular spectrum based method. The red, green and blue component holograms of the polygon-based object are calculated separately by using our NUFFT based method. Then the color hologram is synthesized by placing the red, green and blue component holograms in sequence. The chromatic aberration caused by the wavelength difference can be solved effectively by restricting the sampling rate of the object in the calculation of each wavelength component. The computer simulation shows the feasibility of our method in calculating the color hologram of a polygon-based object. The 3D object can be displayed in color with adjustable size and no chromatic aberration in a holographic display system, which can be considered as an important application in colorful holographic three-dimensional display.
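
    For reference, the conventional uniform-sampling angular spectrum propagation that the NUFFT approach generalises can be sketched as follows; the wavelength, pixel pitch, and propagation distance are illustrative parameters, and the tilted-plane coordinate rotation of the paper is not included.

      import numpy as np

      def angular_spectrum_propagate(u0, wavelength, dx, z):
          # Propagate a sampled complex field u0 over a distance z between
          # parallel planes using the angular spectrum transfer function.
          ny, nx = u0.shape
          fx = np.fft.fftfreq(nx, d=dx)
          fy = np.fft.fftfreq(ny, d=dx)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - FX**2 - FY**2
          kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
          H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components suppressed
          return np.fft.ifft2(np.fft.fft2(u0) * H)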

  14. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results reveal that our visual pipeline does not require deformation models of objects or materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102

  15. The role of action representations in thematic object relations.

    PubMed

    Tsagkaridis, Konstantinos; Watson, Christine E; Jax, Steven A; Buxbaum, Laurel J

    2014-01-01

    A number of studies have explored the role of associative/event-based (thematic) and categorical (taxonomic) relations in the organization of object representations. Recent evidence suggests that thematic information may be particularly important in determining relationships between manipulable artifacts. However, although sensorimotor information is on many accounts an important component of manipulable artifact representations, little is known about the role that action may play during the processing of semantic relationships (particularly thematic relationships) between multiple objects. In this study, we assessed healthy and left hemisphere stroke participants to explore three questions relevant to object relationship processing. First, we assessed whether participants tended to favor thematic relations including action (Th+A, e.g., wine bottle-corkscrew), thematic relationships without action (Th-A, e.g., wine bottle-cheese), or taxonomic relationships (Tax, e.g., wine bottle-water bottle) when choosing between them in an association judgment task with manipulable artifacts. Second, we assessed whether the underlying constructs of event relatedness, action relatedness, and categorical relatedness determined the choices that participants made. Third, we assessed the hypothesis that degraded action knowledge and/or damage to temporo-parietal cortex, a region of the brain associated with the representation of action knowledge, would reduce the influence of action on the choice task. Experiment 1 showed that explicit ratings of event, action, and categorical relatedness were differentially predictive of healthy participants' choices, with action relatedness determining choices between Th+A and Th-A associations above and beyond event and categorical ratings. Experiment 2 focused more specifically on these Th+A vs. Th-A choices and demonstrated that participants with left temporo-parietal lesions, a brain region known to be involved in sensorimotor processing, were

  16. Object identification leads to a conceptual broadening of object representations in lateral prefrontal cortex.

    PubMed

    Gotts, Stephen J; Milleville, Shawn C; Martin, Alex

    2015-09-01

    Recent experience identifying objects leads to later improvements in both speed and accuracy ("repetition priming"), along with simultaneous reductions of neural activity ("repetition suppression"). A popular interpretation of these joint behavioral and neural phenomena is that object representations become perceptually "sharper" with stimulus repetition, eliminating cells that are poorly stimulus-selective and responsive and reducing support for competing representations downstream. Here, we test this hypothesis in an fMRI-adaptation experiment using pictures of objects. Prior to fMRI, participants repeatedly named a set of object pictures. During fMRI, participants viewed adaptation sequences composed of rapidly repeated objects (3-6 repetitions over several seconds) that were either named previously or that were new for the fMRI session, followed by single "deviant" object pictures used to measure recovery from adaptation and that shared a relationship to the adapted picture (a different exemplar of the same object, a conceptual associate, or an unrelated picture). Effects of adaptation and recovery were found throughout visually responsive brain regions. Occipitotemporal cortical regions displayed repetition suppression to previously named relative to new adapters but failed to exhibit pronounced changes in neural tuning. In contrast, changes in the slope of the recovery curves were found in the left lateral prefrontal cortex: Greater residual adaptation was observed to exemplar stimuli and conceptual associates following previously named adapting stimuli, consistent with greater rather than reduced neural overlap among representations of conceptually related objects. Furthermore, this change in neural tuning was directly related to the proportion of conceptual errors made by participants in the naming sessions pre- and post-fMRI, establishing that the experience-dependent conceptual broadening of object representations seen in fMRI is also manifest in behavior

  17. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation

    PubMed Central

    Nicodème, Frédéric; Lin, Zhiyue; Pandolfino, John E.; Kahrilas, Peter J.

    2013-01-01

    BACKGROUND Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. METHODS Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. RESULTS 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. CONCLUSIONS 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. PMID:23734788

  18. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    NASA Astrophysics Data System (ADS)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible, to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ~ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.
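
    The reported scaling b^1.5 ~ D is equivalent to b ~ D^(2/3), so a log-log fit of measured feature sizes against fiber diameters should return a slope near 2/3. The sketch below shows such a check on hypothetical (D, b) pairs; no measurement data from the paper are used.

      import numpy as np

      def fit_power_law(D, b):
          # Fit log(b) = p*log(D) + log(k); for SWAN lithography p should be ~2/3.
          p, logk = np.polyfit(np.log(D), np.log(b), 1)
          return p, np.exp(logk)

      # Hypothetical fiber diameters (m) and fabricated feature sizes (m):
      D = np.array([100e-9, 300e-9, 1e-6, 3e-6])
      b = 2.0e-3 * D ** (2.0 / 3.0)               # synthetic data obeying the scaling law
      exponent, prefactor = fit_power_law(D, b)   # exponent ~= 0.667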

  19. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system of complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results reveal that our visual pipeline does not require deformation models of objects or materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  20. The Representation of Object Distance: Evidence from Neuroimaging and Neuropsychology

    PubMed Central

    Berryhill, Marian E.; Olson, Ingrid R.

    2009-01-01

    Perceived distance in two-dimensional (2D) images relies on monocular distance cues. Here, we examined the representation of perceived object distance using a continuous carry-over adaptation design for fMRI. The task was to look at photographs of objects and make a judgment as to whether or not the item belonged in the kitchen. Importantly, this task was orthogonal to the variable of interest: the object's perceived distance from the viewer. In Experiment 1, whole brain group analyses identified bilateral clusters in the superior occipital gyrus (approximately area V3/V3A) that showed parametric adaptation to relative changes in perceived distance. In Experiment 2, retinotopic analyses confirmed that area V3A/B reflected the greatest magnitude of response to monocular changes in perceived distance. In Experiment 3, we report that the functional activations overlap with the occipito-parietal lesions in a patient with impaired distance perception, showing that the same regions monitor implied (2D) and actual (three-dimensional) distance. These data suggest that distance information is automatically processed even when it is task-irrelevant and that this process relies on superior occipital areas in and around area V3A. PMID:19949468

  1. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
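
    The one-line command format (command name followed by data arguments) can be illustrated with a small parser; the command names and arguments shown are hypothetical and do not reflect the actual FastScript3D vocabulary, and the sketch is written in Python rather than Java purely for brevity.

      def parse_command(line):
          # First whitespace-separated token is the command name;
          # the remainder of the string holds its data arguments.
          tokens = line.strip().split()
          return tokens[0], tokens[1:]

      # Hypothetical examples of the one-line text-string style:
      for cmd in ["sphere ball 1.0 0 0 0", "rotate ball 0 45 0", "load model scene.x3d"]:
          name, args = parse_command(cmd)
          print(name, args)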

  2. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    PubMed Central

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  3. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    DOE PAGES Beta

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-05-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  4. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    PubMed

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  5. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    SciTech Connect

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  6. Tilt scanning interferometry: a 3D k-space representation for depth-resolved structure and displacement measurement in scattering materials

    NASA Astrophysics Data System (ADS)

    Galizzi, Gustavo E.; Coupland, Jeremy M.; Ruiz, Pablo D.

    2010-09-01

    Tilt Scanning Interferometry (TSI) has been recently developed as an experimental method to measure multi-component displacement fields inside the volume of semitransparent scattering materials. It can be considered as an extension of speckle interferometry in 3D, in which the illumination angle is tilted to provide depth information, or as an optical diffraction tomography technique with phase detection. It relies on phase measurements to extract the displacement information, as in the usual 2D counterparts. A numerical model to simulate the speckle fields recorded in TSI has been recently developed to enable the study on how the phase and amplitude are affected by factors such as refraction, absorption, scattering, dispersion, stress-optic coupling and spatial variations of the refractive index, all of which may lead to spurious displacements. In order to extract depth-resolved structure and phase information from TSI data, the approach had been to use Fourier Transformation of the intensity modulation signal along the illumination angle axis. However, it turns out that a more complete description of the imaging properties of the system for tomographic optical diffraction can be achieved using a 3D representation of the transfer function in k-space. According to this formalism, TSI is presented as a linear filtering operation. In this paper we describe the transfer function of TSI in 3D k-space, evaluate the 3D point spread function and present simulated results.

  7. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  8. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  9. Robust visual tracking of infrared object via sparse representation model

    NASA Astrophysics Data System (ADS)

    Ma, Junkai; Liu, Haibo; Chang, Zheng; Hui, Bin

    2014-11-01

    In this paper, we propose a robust tracking method for infrared objects. We introduce the appearance model and sparse representation into the framework of a particle filter to achieve this goal. The mechanism behind this method is to represent every candidate image patch as a linear combination of bases in the subspace spanned by the target templates. The natural property that, if the candidate image patch is the target, its coefficient vector must be sparse ensures the success of our algorithm. Firstly, the target is indicated manually in the first frame of the video, and the dictionary is constructed using the appearance model of the target templates. Secondly, candidate image patches are selected in the following frames and their sparse coefficient vectors are calculated via an l1-norm minimization algorithm. According to the sparse coefficient vectors, the correct candidate is determined as the target. Finally, the target templates are updated dynamically to cope with appearance changes during tracking. This paper also addresses the problems of scale change and rotation of the target during tracking. Theoretical analysis and experimental results show that the proposed algorithm is effective and robust.
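
    A minimal sketch of the sparse-coding step is given below, using an off-the-shelf l1-regularised solver (scikit-learn's Lasso) in place of whatever l1-norm minimiser the authors used; the candidate patch whose sparse code yields the smallest reconstruction residual would be selected as the target.

      import numpy as np
      from sklearn.linear_model import Lasso

      def candidate_residual(D, y, alpha=0.01):
          # D: (n_pixels, n_templates) dictionary built from target templates.
          # y: (n_pixels,) vectorised candidate image patch.
          lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
          lasso.fit(D, y)                         # sparse coefficient vector
          return np.linalg.norm(y - D @ lasso.coef_)

      # The candidate (particle) with the smallest residual is taken as the target;
      # the templates are then updated with the new appearance.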

  10. 3D shape and eccentricity measurements of fast rotating rough objects by two mutually tilted interference fringe systems

    NASA Astrophysics Data System (ADS)

    Czarske, J. W.; Kuschmierz, R.; Günther, P.

    2013-06-01

    Precise measurements of the distance, eccentricity and 3D shape of fast moving objects such as turning parts of lathes, gear shafts, magnetic bearings, camshafts, crankshafts and rotors of vacuum pumps are important tasks, but they are also big challenges, since contactless precise measurement techniques are required. Optical techniques are well suited for distance measurements of non-moving surfaces; however, measurements of laterally fast moving surfaces are still challenging. For such tasks the laser Doppler distance sensor technique was invented at TU Dresden some years ago. This technique is realized by two mutually tilted interference fringe systems, where the distance is coded in the phase difference between the generated interference signals. However, due to the speckle effect, different random envelopes and phase jumps of the interference signals occur, which disturb the estimation of the phase difference between the interference signals. In this paper, we report on a recent breakthrough in the measurement uncertainty budget. By matching the illumination and receiving optics, the measurement uncertainty of displacement and distance can be reduced by about one order of magnitude. For displacement measurements of a recurring rough surface, a standard deviation of 110 nm was attained at lateral velocities of 5 m/s. Using the additionally measured lateral velocity and the rotational speed, the two-dimensional shape of rotating objects is calculated; the three-dimensional shape can be obtained by employing a line camera. Since the measurement uncertainty of displacement, vibration, distance, eccentricity, and shape is nearly independent of the lateral surface velocity, this technique is predestined for fast-rotating objects. In particular, it can be advantageously used for the quality control of workpieces inside a lathe towards the reduction of process tolerances, installation times and

  11. Declining object recognition performance in semantic dementia: A case for stored visual object representations.

    PubMed

    Tree, Jeremy J; Playfoot, David

    2015-01-01

    The role of the semantic system in recognizing objects is a matter of debate. Connectionist theories argue that it is impossible for a participant to determine that an object is familiar to them without recourse to a semantic hub; localist theories state that accessing a stored representation of the visual features of the object is sufficient for recognition. We examine this issue through the longitudinal study of two cases of semantic dementia, a neurodegenerative disorder characterized by a progressive degradation of the semantic system. The cases in this paper do not conform to the "common" pattern of object recognition performance in semantic dementia described by Rogers, Lambon Ralph, Hodges, and Patterson (2004, "Natural selection: The impact of semantic impairment on lexical and object decision," Cognitive Neuropsychology, 21, 331-352), and show no systematic relationship between severity of semantic impairment and success in object decision. We argue that these data are inconsistent with the connectionist position but can be easily reconciled with localist theories that propose stored structural descriptions of objects outside of the semantic system. PMID:27355607

  12. The Appropriateness of the Helical Axis Technique and Six Available Cardan Sequences for the Representation of 3-D Lead Leg Kinematics During the Fencing Lunge

    PubMed Central

    Sinclair, Jonathan; Taylor, Paul J; Bottoms, Lindsay

    Cardan/Euler angles represent the most common technique for the quantification of segmental rotations. Cardan angles are influenced by their ordered sequence, and sensitive to planar-cross talk from the dominant rotation plane, which may affect the angular parameters. The International Society of Biomechanics (ISB) currently recommends a sagittal, coronal, and then transverse (XYZ) ordered sequence, although it has been proposed that when quantifying non-sagittal rotations this may not be the most appropriate technique. This study examined the influence of the helical and six available Cardan sequences on lower extremity three-dimensional (3-D) kinematics of the lead leg during the fencing lunge. Kinematic data were obtained using a 3-D motion capture system as participants completed simulated lunges. Repeated measures ANOVAs were used to compare discrete kinematic parameters, and intraclass correlations were also utilized to determine evidence of planar crosstalk. The results indicate that in all three planes of rotation, peak angle and range of motion angles using the YXZ and ZXY sequences were significantly greater than the other sequences. It was also noted that the utilization of the YXZ and ZXY sequences was associated with the strongest correlations from the sagittal plane, and the XYZ sequence was found habitually to be associated with the lowest correlations. It appears that for accurate representation of 3-D kinematics of the lead leg during the fencing lunge, the XYZ sequence is the most appropriate and as such its continued utilization is encouraged. PMID:24146700
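
    For reference, the ISB-recommended XYZ (sagittal, coronal, transverse) Cardan decomposition of a segment rotation matrix can be sketched as below; this is a textbook extraction, not the authors' processing code, and it assumes R = Rx(a) Ry(b) Rz(c) with no gimbal-lock handling.

      import numpy as np

      def cardan_xyz(R):
          # Extract Cardan angles (a, b, c) in radians from a 3x3 rotation
          # matrix assumed to factor as R = Rx(a) @ Ry(b) @ Rz(c).
          b = np.arcsin(R[0, 2])
          a = np.arctan2(-R[1, 2], R[2, 2])
          c = np.arctan2(-R[0, 1], R[0, 0])
          return a, b, c

      # Round-trip check with arbitrary angles (radians):
      a0, b0, c0 = 0.3, -0.2, 0.5
      Rx = np.array([[1, 0, 0], [0, np.cos(a0), -np.sin(a0)], [0, np.sin(a0), np.cos(a0)]])
      Ry = np.array([[np.cos(b0), 0, np.sin(b0)], [0, 1, 0], [-np.sin(b0), 0, np.cos(b0)]])
      Rz = np.array([[np.cos(c0), -np.sin(c0), 0], [np.sin(c0), np.cos(c0), 0], [0, 0, 1]])
      print(cardan_xyz(Rx @ Ry @ Rz))   # ~ (0.3, -0.2, 0.5)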

  13. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    SciTech Connect

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based ''demons'' algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a
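
    The grayscale "demons" force at the heart of such a registration can be sketched in 2-D as below; this is the classic Thirion update with Gaussian regularisation of the displacement field, shown only to make the idea concrete, and it omits the paper's object-based global constraints, seed constraint, and hierarchical frequency-domain diffusion.

      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates

      def demons_iteration(fixed, moving, disp_y, disp_x, sigma=2.0):
          # Warp the moving image with the current displacement field.
          yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                               np.arange(fixed.shape[1]), indexing="ij")
          warped = map_coordinates(moving, [yy + disp_y, xx + disp_x], order=1)
          # Demons force driven by the intensity difference and the gradient
          # of the fixed (reference) image.
          gy, gx = np.gradient(fixed)
          diff = warped - fixed
          denom = gx**2 + gy**2 + diff**2
          denom[denom == 0] = 1.0
          # Update and smooth the displacement field (Gaussian regularisation).
          disp_x = gaussian_filter(disp_x - diff * gx / denom, sigma)
          disp_y = gaussian_filter(disp_y - diff * gy / denom, sigma)
          return disp_y, disp_x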

  14. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  15. Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers

    NASA Astrophysics Data System (ADS)

    Sandbach, S. D.; Lane, S. N.; Hardy, R. J.; Amsler, M. L.; Ashworth, P. J.; Best, J. L.; Nicholas, A. P.; Orfeo, O.; Parsons, D. R.; Reesink, A. J. H.; Szupiany, R. N.

    2012-12-01

    Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest simpler two-dimensional models may have great utility in the investigation of flow within large rivers.
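
    The roughness-length treatment referred to here is, in essence, the logarithmic law of the wall applied in the near-bed cells; a minimal sketch of that relation is given below, with the friction velocity, roughness length, and heights chosen purely for illustration.

      import numpy as np

      KAPPA = 0.41  # von Karman constant

      def log_law_velocity(z, u_star, z0):
          # u(z) = (u*/kappa) * ln(z/z0): velocity at height z above the bed
          # implied by a roughness length z0 and friction velocity u*.
          return (u_star / KAPPA) * np.log(z / z0)

      z = np.array([0.1, 0.5, 1.0, 2.0])                 # heights above the bed (m)
      print(log_law_velocity(z, u_star=0.05, z0=0.01))   # illustrative values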

  16. A fisher vector representation of GPR data for detecting buried objects

    NASA Astrophysics Data System (ADS)

    Karem, Andrew; Khalifa, Amine B.; Frigui, Hichem

    2016-05-01

    We present a new method, based on the Fisher Vector (FV), for detecting buried explosive objects using ground-penetrating radar (GPR) data. First, low-level dense SIFT features are extracted from a grid covering each region of interest (ROI). ROIs are identified as regions with high energy along the (down-track, depth) dimensions of the 3-D GPR cube, or with high energy along the (cross-track, depth) dimensions. Next, we model the training data (in the SIFT feature space) by a mixture of Gaussian components. Then, we construct FV descriptors based on the Fisher Kernel. The Fisher Kernel characterizes low-level features from an ROI by their deviation from a generative model. The deviation is the gradient of the ROI log-likelihood with respect to the generative model parameters. The vectorial representation of all the deviations is called the Fisher Vector. FV is a generalization of the standard Bag of Words (BoW) method, which provides a framework to map a set of local descriptors to a global feature vector. It is more efficient to compute than the BoW since it relies on a significantly smaller codebook. In addition, mapping a GPR signature into one global feature vector using this technique makes it more efficient to classify using simple and fast linear classifiers such as Support Vector Machines. The proposed approach is applied to detect buried explosive objects using GPR data. The selected data were accumulated across multiple dates and multiple test sites by a vehicle mounted mine detector (VMMD) using a GPR sensor. These data consist of a diverse set of conventional landmines and other buried explosive objects of varying shapes, metal content, and burial depths. The performance of the proposed approach is analyzed using receiver operating characteristics (ROC) and is compared to other state-of-the-art feature representation methods.
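
    A simplified Fisher Vector encoding (gradients with respect to the GMM means only, which already generalises the bag-of-words histogram) can be sketched as follows; the full formulation also includes gradients with respect to the variances, and the descriptor dimensions and GMM size here are arbitrary.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fisher_vector_means(X, gmm):
          # X: (N, D) local descriptors (e.g. dense SIFT) from one ROI.
          # Returns the K*D gradient of the log-likelihood w.r.t. the GMM means.
          N = X.shape[0]
          gamma = gmm.predict_proba(X)                        # (N, K) posteriors
          diff = X[:, None, :] - gmm.means_[None, :, :]       # (N, K, D)
          diff /= np.sqrt(gmm.covariances_)[None, :, :]       # assumes diagonal covariances
          fv = (gamma[:, :, None] * diff).sum(axis=0)         # (K, D)
          fv /= (N * np.sqrt(gmm.weights_))[:, None]
          return fv.ravel()

      # gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(training_descriptors)
      # fv = fisher_vector_means(roi_descriptors, gmm)   # fed to a linear SVM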

  17. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants. PMID:20551339

  18. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    NASA Astrophysics Data System (ADS)

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method that is being developed by the ARESlab of Rome's La Sapienza University for the documentation and representation of archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in the Campania region along the Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from the local necropolis, which contained more than 100,000 artifacts collected in the "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project concerns the archaeological area of the Insula Volusiana, located in the Forum Boarium close to the Sant'Omobono sacred area. During the studies, photographs were taken with Canon EOS 5D Mark II and Canon EOS 600D cameras, and 3D models and meshes were created in Photoscan software. A TOF-CW Z+F IMAGER® 5006h laser scanner was used for dense data collection in the archaeological area of Rome and for a metric comparison between range-based and image-based techniques. In these projects, image-based modeling proved to be a low-cost technique capable of high accuracy when planned correctly, and it also showed how it helps to record complex strata and architectures compared with traditional manual documentation methods (e.g. two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, especially given the "destructive" character of an excavation. The presented methodology is suitable for 3D registration, and its accuracy also improves the scientific value of the documentation.

  19. Thermodynamic depth of causal states: Objective complexity via minimal representations

    SciTech Connect

    Crutchfield, J.P. |; Shalizi, C.R. |

    1999-01-01

    Thermodynamic depth is an appealing but flawed structural complexity measure. It depends on a set of macroscopic states for a system, but neither its original introduction by Lloyd and Pagels nor any follow-up work has considered how to select these states. Depth, therefore, is at root arbitrary. Computational mechanics, an alternative approach to structural complexity, provides a definition for a system's minimal, necessary causal states and a procedure for finding them. We show that the rate of increase in thermodynamic depth, or dive, is the system's reverse-time Shannon entropy rate, and so depth only measures degrees of macroscopic randomness, not structure. To fix this, we redefine the depth in terms of the causal state representation (ε-machines) and show that this representation gives the minimum dive consistent with accurate prediction. Thus, ε-machines are optimally shallow. © 1999 The American Physical Society

  20. A model for calculating the errors of 2D bulk analysis relative to the true 3D bulk composition of an object, with application to chondrules

    NASA Astrophysics Data System (ADS)

    Hezel, Dominik C.

    2007-09-01

    Certain problems in the Geosciences require knowledge of the chemical bulk composition of objects such as, for example, minerals or lithic clasts. This 3D bulk chemical composition (bcc) is often difficult to obtain, but if the object is prepared as a thin or thick polished section, a 2D bcc can easily be determined using, for example, an electron microprobe. The 2D bcc contains an error relative to the true 3D bcc that is unknown. Here I present a computer program that calculates this error, which is represented as the standard deviation of the 2D bcc relative to the real 3D bcc. A requirement for such calculations is an approximate structure of the 3D object. In petrological applications, the known fabrics of rocks facilitate modeling. The size of the standard deviation depends on (1) the modal abundance of the phases, (2) the element concentration differences between phases and (3) the distribution of the phases, i.e. the homogeneity/heterogeneity of the object considered. A newly introduced parameter "τ" is used as a measure of this homogeneity/heterogeneity. Accessory phases, which do not necessarily appear in 2D thin sections, are a second source of error, in particular if they contain high concentrations of specific elements. An abundance of only 1 vol% of an accessory phase may raise the 3D bcc of an element by up to a factor of ~8. The code can be queried as to whether a broad-beam, point, line or area analysis technique is best for obtaining the 2D bcc. No general conclusion can be deduced, as the error rates of these techniques depend on the specific structure of the object considered. As an example, chondrules—rapidly solidified melt droplets of chondritic meteorites—are used. It is demonstrated that 2D bcc may be used to reveal trends in the chemistry of 3D objects.
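
    The kind of error the program quantifies can be reproduced in spirit with a small Monte Carlo sketch (not the author's code): build a synthetic two-phase 3D object, compute its true 3D bulk concentration for one element, and compare it with bulk concentrations measured on random 2D slices. The phase abundance and concentrations below are illustrative assumptions; a more clustered (heterogeneous) phase distribution would widen the spread, mirroring the role of τ.

      import numpy as np

      rng = np.random.default_rng(42)

      # Synthetic 3D object: 1 = phase A, 0 = phase B, with ~30 vol% phase A.
      N = 64
      phase_a = rng.random((N, N, N)) < 0.30

      # Element concentration (wt%) in each phase (illustrative values).
      c_A, c_B = 12.0, 2.0
      conc = np.where(phase_a, c_A, c_B)

      bulk_3d = conc.mean()                     # true 3D bulk composition

      # 2D bulk compositions from random slices normal to the z axis.
      slice_bulks = np.array([conc[:, :, rng.integers(N)].mean() for _ in range(500)])
      rel_error = (slice_bulks - bulk_3d) / bulk_3d

      print(f"3D bulk: {bulk_3d:.3f} wt%")
      print(f"2D bulk std. deviation: {slice_bulks.std():.3f} wt% "
            f"({100 * rel_error.std():.2f} % relative)")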

  1. Object Representation in Infants' Coordination of Manipulative Force

    ERIC Educational Resources Information Center

    Mash, Clay

    2007-01-01

    This study examined infants' use of object knowledge for scaling the manipulative force of object-directed actions. Infants 9, 12, and 15 months of age were outfitted with motion-analysis sensors on their arms and then presented with stimulus objects to examine individually over a series of familiarization trials. Two stimulus objects were used in…

  2. The Development of Symbolic Coordination: Representation of Imagined Objects, Executive Function, and Theory of Mind

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Overton, Willis F.; Kovacs, Stacie L.

    2005-01-01

    Children's developing competence with symbolic representations was assessed in 3 studies. Study 1 examined the hypothesis that the production of imaginary symbolic objects in pantomime requires the simultaneous coordination of the dual representations of a dynamic action and a symbolic object. We explored this coordination of symbolic…

  3. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  4. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  5. Object-oriented knowledge representation for expert systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1991-01-01

    Object oriented techniques have generated considerable interest in the Artificial Intelligence (AI) community in recent years. This paper discusses an approach for representing expert system knowledge using classes, objects, and message passing. The implementation is in version 4.3 of NASA's C Language Integrated Production System (CLIPS), an expert system tool that does not provide direct support for object oriented design. The method uses programmer imposed conventions and keywords to structure facts, and rules to provide object oriented capabilities.

  6. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    PubMed

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work. PMID:21711051

  7. Objective Assessment and Design Improvement of a Staring, Sparse Transducer Array by the Spatial Crosstalk Matrix for 3D Photoacoustic Tomography

    PubMed Central

    Kosik, Ivan; Raess, Avery

    2015-01-01

    Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization. PMID:25875177
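
    A minimal sketch of three of the figures-of-merit named above (root mean square error, peak signal-to-noise ratio, and mean absolute error) computed between a normalized crosstalk matrix and the identity matrix; the crosstalk matrix here is random for illustration rather than derived from a PAT system model, and the 3D structural similarity index is omitted for brevity.

      import numpy as np

      def crosstalk_figures_of_merit(crosstalk):
          """RMSE, PSNR and MAE between a normalized crosstalk matrix and identity."""
          c = crosstalk / crosstalk.max()        # normalize to [0, 1]
          err = c - np.eye(c.shape[0])
          rmse = np.sqrt(np.mean(err ** 2))
          mae = np.mean(np.abs(err))
          psnr = 20 * np.log10(1.0 / rmse)       # peak value is 1 after normalization
          return rmse, psnr, mae

      # Illustrative crosstalk matrix: strong diagonal plus weak off-diagonal coupling.
      rng = np.random.default_rng(1)
      n = 100
      ct = np.eye(n) + 0.05 * rng.random((n, n))
      print(crosstalk_figures_of_merit(ct))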

  8. Modeling of 3-D Object Manipulation by Multi-Joint Robot Fingers under Non-Holonomic Constraints and Stable Blind Grasping

    NASA Astrophysics Data System (ADS)

    Arimoto, Suguru; Yoshida, Morio; Bae, Ji-Hun

    This paper derives a mathematical model that expresses the motion of a pair of multi-joint robot fingers with hemispherical rigid ends grasping and manipulating a 3-D rigid object with parallel flat surfaces. Rolling contacts arising between finger ends and object surfaces are taken into consideration and modeled as Pfaffian constraints, from which constraint forces emerge tangentially to the object surfaces. Another noteworthy difference between modeling the motion of a 3-D object and that of a 2-D object is that the instantaneous axis of rotation of the object is fixed in the 2-D case but time-varying in the 3-D case. A further difficulty that has prevented us from modeling 3-D physical interactions between a pair of fingers and a rigid object lies in the problem of treating spinning motion that may arise around the opposing axis running from the contact point between one finger end and one side of the object to the contact point on the other side. This paper shows that, once such spinning motion stops as the object mass center approaches a position just beneath the opposition axis, this cessation of spinning evokes a further nonholonomic constraint. Hence, the multi-body dynamics of the overall fingers-object system is subject to non-holonomic constraints concerning a 3-D orthogonal matrix expressing three mutually orthogonal unit vectors fixed to the object, together with an extra non-holonomic constraint that the instantaneous axis of rotation of the object is always orthogonal to the opposing axis. It is shown that Lagrange's equation of motion of the overall system can be derived without violating the causality that governs the non-holonomic constraints. This immediately suggests the possible construction of a numerical simulator of multi-body dynamics that can express the motion of the fingers and object as they physically interact with each other. By referring to the fact that humans grasp an object in the form of precision prehension dynamically and stably by using opposable force between the thumb and another

  9. Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex

    PubMed Central

    Jeong, Su Keun

    2016-01-01

    The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. Although object representation has traditionally been

  10. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  11. The "What" and "Where" of Object Representations in Infancy.

    ERIC Educational Resources Information Center

    Mareschal, Denis; Johnson, Mark H.

    2003-01-01

    Tested 4-month-olds' memory for surface feature and location information following brief occlusions. Found that when target objects were images of female faces or monochromatic asterisks, infants increased looking times following changes in identity or color but not changes in location or combinations of feature and location. When objects were…

  12. Convergent and invariant object representations for sight, sound, and touch.

    PubMed

    Man, Kingson; Damasio, Antonio; Meyer, Kaspar; Kaplan, Jonas T

    2015-09-01

    We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts. PMID:26047030
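
    The crossmodal test described above (train a classifier on patterns from one modality, test it on another) can be sketched as follows with synthetic data; the voxel counts, trial numbers, and noise model are illustrative assumptions, not those of the study.

      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      n_trials, n_voxels = 60, 200
      labels = np.repeat([0, 1, 2], n_trials // 3)      # three objects

      # Shared object signal plus modality-specific noise (toy model of invariance).
      object_patterns = rng.normal(size=(3, n_voxels))
      def simulate(noise):
          return object_patterns[labels] + noise * rng.normal(size=(n_trials, n_voxels))

      visual = simulate(1.0)
      tactile = simulate(1.0)

      clf = LinearSVC(C=1.0, max_iter=10000).fit(visual, labels)   # train on vision
      print("crossmodal accuracy:", clf.score(tactile, labels))    # test on touch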

  13. Multidimensional representation of objects-The influence of task demands.

    PubMed

    Goldfarb, L; Sabah, K

    2016-04-01

    In our daily life, we often encounter situations in which different features of several multidimensional objects must be perceived simultaneously. There are two types of environments of this kind: environments with multidimensional objects that have unique feature associations, and environments with multidimensional objects that have mixed feature associations. Recently, we (Goldfarb & Treisman, 2013) described the association effect, suggesting that the latter type causes behavioral perception difficulties. In the present study, we investigated this effect further by examining whether the effect is determined via a feedforward visual path or via a high-order task demand component. In order to test this question, in Experiment 1 a set of multidimensional objects were presented while we manipulated the letter case of a target feature, thus creating a visually different but semantically equivalent object, in terms of its identity. Similarly, in Experiment 2 artificial groups with different physical properties were created according to the task demands. The results indicated that the association effect is determined by the task demands, which create the group of reference. The importance of high-order task demand components in the association effect is further discussed, as well as the possible role of the neural synchrony of object files in explaining this effect. PMID:26163190

  14. RAG-3D: a search tool for RNA 3D substructures.

    PubMed

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-10-30

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D, a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool, designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547
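
    As a hedged illustration of the graph-based idea (not the RAG-3D implementation), the sketch below represents structures as graphs of secondary-structure elements and asks whether a small query graph occurs as a subgraph of a database entry, using networkx's isomorphism matcher.

      import networkx as nx
      from networkx.algorithms import isomorphism

      # Database entry: a toy graph of connected secondary-structure elements.
      database_graph = nx.Graph([("helix1", "loop1"), ("loop1", "helix2"),
                                 ("helix2", "loop2"), ("loop2", "helix3")])

      # Query: a smaller fragment we want to locate inside the database entry.
      query_graph = nx.Graph([("a", "b"), ("b", "c")])   # helix-loop-helix shape

      matcher = isomorphism.GraphMatcher(database_graph, query_graph)
      print("substructure found:", matcher.subgraph_is_isomorphic())
      for mapping in matcher.subgraph_isomorphisms_iter():
          print(mapping)   # which database elements play the roles of a, b, c
          break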

  15. Repetition Blindness Reveals Differences between the Representations of Manipulable and Nonmanipulable Objects

    ERIC Educational Resources Information Center

    Harris, Irina M.; Murray, Alexandra M.; Hayward, William G.; O'Callaghan, Claire; Andrews, Sally

    2012-01-01

    We used repetition blindness to investigate the nature of the representations underlying identification of manipulable objects. Observers named objects presented in rapid serial visual presentation streams containing either manipulable or nonmanipulable objects. In half the streams, 1 object was repeated. Overall accuracy was lower when streams…

  16. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
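
    Steps (1) and (3) of the processing chain above, refocusing followed by range finding, can be sketched as choosing, for each image region, the refocus depth that maximizes a sharpness measure. The focus measure and the synthetic focal stack below are illustrative assumptions; the actual Raytrix processing is not reproduced here.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def sharpness(patch):
          """Simple focus measure: variance of the image gradient magnitude."""
          gy, gx = np.gradient(patch.astype(float))
          return np.var(np.hypot(gx, gy))

      def depth_from_focus(focal_stack, depths, patch=16):
          """For each patch, pick the refocus depth with the highest sharpness."""
          n, h, w = focal_stack.shape
          depth_map = np.zeros((h // patch, w // patch))
          for i in range(h // patch):
              for j in range(w // patch):
                  scores = [sharpness(focal_stack[k, i*patch:(i+1)*patch, j*patch:(j+1)*patch])
                            for k in range(n)]
                  depth_map[i, j] = depths[int(np.argmax(scores))]
          return depth_map

      # Toy focal stack: 10 refocused images of a random scene, sharpest at index 4.
      rng = np.random.default_rng(0)
      scene = rng.random((128, 128))
      blurred = uniform_filter(scene, size=5)
      stack = np.stack([scene if k == 4 else blurred for k in range(10)])
      print(depth_from_focus(stack, depths=np.linspace(1.0, 10.0, 10)))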

  17. A Double-Dissociation in Infants' Representations of Object Arrays

    ERIC Educational Resources Information Center

    Feigenson, L.

    2005-01-01

    Previous studies show that infants can compute either the total continuous extent (e.g. Clearfield, M.W., & Mix, K.S. (1999). Number versus contour length in infants' discrimination of small visual sets. Psychological Science, 10(5), 408-411; Feigenson, L., & Carey, S. (2003). Tracking individuals via object-files: evidence from infants' manual…

  18. Mechanisms Underlying the Emergence of Object Representations during Infancy

    ERIC Educational Resources Information Center

    Scott, Lisa S.

    2011-01-01

    The effects of individual versus category training, using behavioral indices of stimulus discrimination and neural ERPs indices of holistic processing, were examined in infants. Following pretraining assessments at 6 months, infants were sent home with training books of objects for 3 months. One group of infants was trained with six different…

  19. An application of object-oriented knowledge representation to engineering expert systems

    NASA Technical Reports Server (NTRS)

    Logie, D. S.; Kamil, H.; Umaretiya, J. R.

    1990-01-01

    The paper describes an object-oriented knowledge representation and its application to engineering expert systems. The object-oriented approach promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects and organized by defining relationships between the objects. An Object Representation Language (ORL) was implemented as a tool for building and manipulating the object base. Rule-based knowledge representation is then used to simulate engineering design reasoning. Using a common object base, very large expert systems can be developed, comprised of small, individually processed, rule sets. The integration of these two schemes makes it easier to develop practical engineering expert systems. The general approach to applying this technology to the domain of the finite element analysis, design, and optimization of aerospace structures is discussed.

  20. How Category Learning Affects Object Representations: Not All Morphspaces Stretch Alike

    ERIC Educational Resources Information Center

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2012-01-01

    How does learning to categorize objects affect how people visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies have found that objects become more visually discriminable along dimensions relevant…

  1. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and of the spatial relationships among them.

  2. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How does one handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  3. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGESBeta

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  4. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  5. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  6. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. PMID:23212750

  7. X-ray 3D computed tomography of large objects: investigation of an ancient globe created by Vincenzo Coronelli

    NASA Astrophysics Data System (ADS)

    Morigi, Maria Pia; Casali, Franco; Berdondini, Andrea; Bettuzzi, Matteo; Bianconi, Davide; Brancaccio, Rosa; Castellani, Alice; D'Errico, Vincenzo; Pasini, Alessandro; Rossi, Alberto; Labanti, C.; Scianna, Nicolangelo

    2007-07-01

    X-ray cone-beam Computed Tomography is a powerful tool for the non-destructive investigation of the inner structure of works of art. With regard to Cultural Heritage conservation, different kinds of objects have to be inspected in order to acquire significant information such as the manufacturing technique or the presence of defects and damage. Knowledge of these features is very useful for determining adequate maintenance and restoration procedures. The use of medical CT scanners gives good results only when the investigated objects have size and density similar to those of the human body; however, this requirement is not always fulfilled in Cultural Heritage diagnostics. For this reason a system for Digital Radiography and Computed Tomography of large objects, especially works of art, has recently been developed by researchers of the Physics Department of the University of Bologna. The design of the system is very different from any commercially available CT machine. The system consists of a 200 kVp X-ray source, a detector and a motorized mechanical structure for moving the detector and the object in order to collect the required number of radiographic projections. The detector is made up of a 450 × 450 mm² structured CsI(Tl) scintillating screen, optically coupled to a CCD camera. In this paper we present the results of the tomographic investigation recently performed on an ancient globe created by the famous cosmographer, cartographer and encyclopedist Vincenzo Coronelli.

  8. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  9. Strength of object representation: its key role in object-based attention for determining the competition result between Gestalt and top-down objects.

    PubMed

    Zhao, Jingjing; Wang, Yonghui; Liu, Donglai; Zhao, Liang; Liu, Peng

    2015-10-01

    It was found in previous studies that two types of objects (rectangles formed according to the Gestalt principle and Chinese words formed in a top-down fashion) can both induce an object-based effect. The aim of the present study was to investigate how the strength of an object representation affects the result of the competition between these two types of objects based on research carried out by Liu, Wang and Zhou [(2011) Acta Psychologica, 138(3), 397-404]. In Experiment 1, the rectangles were filled with two different colors to increase the strength of Gestalt object representation, and we found that the object effect changed significantly for the different stimulus types. Experiment 2 used Chinese words with various familiarities to manipulate the strength of the top-down object representation. As a result, the object-based effect induced by rectangles was observed only when the Chinese word familiarity was low. These results suggest that the strength of object representation determines the result of competition between different types of objects. PMID:26041271

  10. 3D-Modeling of deformed halite hopper crystals: Object based image analysis and support vector machine, a first evaluation

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-05-01

    Halite hopper crystals are thought to develop by displacive growth in unconsolidated mud (Gornitz & Schreiber, 1984). The Alpine Haselgebirge, but also e.g. the salt deposits of the Rhine graben (mined at the beginning of the 20th century), comprise hopper crystals with shapes of cuboids, parallelepipeds and rhombohedrons (Görgey, 1912). Obviously, they deformed under oriented stress, which previous work has attempted to reconstruct with respect to the sedimentary layering (Leitner et al., 2013). In the present work, deformed halite hopper crystals embedded in mudrock were reconstructed automatically. Object-based image analysis (OBIA) has previously been used successfully in remote sensing for 2D images; the present study represents the first time the method has been used for the reconstruction of three-dimensional geological objects. First, a reference (gold standard) was created manually by redrawing the contours of the halite crystals on each HRXCT scanning slice. Then, for OBIA, the computer program eCognition was used, and a rule set was developed for the automated reconstruction. The strength of OBIA was its ability to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. In a second step, all objects unsuitable for a structural deformation analysis (clusters, polyhalite-coated crystals and spherical halites) were dismissed using a support vector machine (SVM). The SVM simultaneously and drastically reduced the number of halite objects: of 184 OBIA objects, 67 well-shaped ones remained, which comes close to the 52 pre-selected objects. To assess the accuracy of the automated reconstruction, the results before and after the SVM were compared to the reference, i.e. the gold standard. State-of-the-art per-scene statistics were extended to per-object statistics. Görgey R (1912) Zur Kenntnis der Kalisalzlager von Wittelsheim im Ober-Elsaß. Tschermaks Mineral Petrogr Mitt 31:339-468. Gornitz VM, Schreiber BC (1981) Displacive halite hoppers from the Dead Sea

  11. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
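
    As a minimal sketch of the stereo-vision path mentioned above, depth can be recovered from the disparity between two rectified images as Z = f * B / d. The block-matching window, focal length, and baseline below are illustrative assumptions.

      import numpy as np

      def disparity_map(left, right, max_disp=32, block=9):
          """Brute-force block matching along scanlines of two rectified images."""
          h, w = left.shape
          half = block // 2
          disp = np.zeros((h, w))
          for y in range(half, h - half):
              for x in range(half + max_disp, w - half):
                  ref = left[y-half:y+half+1, x-half:x+half+1]
                  costs = [np.sum(np.abs(ref - right[y-half:y+half+1, x-d-half:x-d+half+1]))
                           for d in range(max_disp)]
                  disp[y, x] = int(np.argmin(costs))
          return disp

      focal_px, baseline_m = 800.0, 0.12         # illustrative camera parameters
      rng = np.random.default_rng(0)
      right_img = rng.random((64, 96))
      left_img = np.roll(right_img, 7, axis=1)   # toy scene: uniform 7-pixel disparity

      disp = disparity_map(left_img, right_img)
      valid = disp > 0
      depth_m = focal_px * baseline_m / np.where(valid, disp, np.nan)
      print("median recovered disparity:", np.median(disp[valid]))
      print("median recovered depth (m):", np.nanmedian(depth_m))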

  12. Distance to the object and social representations: replication and further evidences.

    PubMed

    Dany, Lionel; Apostolidis, Themis; Harabi, Sofiene

    2014-01-01

    Distance to the object is a new approach that highlights the complex nature of the link between groups and social representations. It is composed of three elements: knowledge, involvement, and level of practices associated with the social object. This study aims to replicate a previous study that demonstrated the validity of distance to the object for exploring social representations of cannabis. We carried out research on the social representations of cocaine. Respondents (n = 200) completed a questionnaire including opinions related to cocaine and the constitutive elements of distance to cocaine. The regression analysis on the representational dimensions revealed a significant effect of the distance variable on two dimensions (social facilitator, addiction and social dangerousness). The groups that were "distant" from the object showed stronger adherence to the normative component than to the functional component of the social representation, in opposition to those who were "close" to the object. The concept of distance to the object is thus heuristic, as it offers an integrative reading grid that makes it possible to understand and highlight the link individuals maintain with a social representation. PMID:26054492

  13. Integration of object-oriented knowledge representation with the CLIPS rule based system

    NASA Technical Reports Server (NTRS)

    Logie, David S.; Kamil, Hasan

    1990-01-01

    The paper describes a portion of the work aimed at developing an integrated, knowledge based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way so as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule based system C Language Production Systems (CLIPS), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule based knowledge is used to simulate the reasoning process. Alone, the object based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.
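
    A hedged Python sketch of the idea described above, an object base whose objects hold descriptive properties and inherit data through relationship links; this illustrates the concept only and is not the actual ORL or CLIPS code.

      class KnowledgeObject:
          """An object with local properties and inheritance through an 'is-a' link."""
          def __init__(self, name, parent=None, **properties):
              self.name = name
              self.parent = parent          # relationship link to another object
              self.properties = properties

          def get(self, key):
              # Look up a property locally, then follow the relationship link upward.
              if key in self.properties:
                  return self.properties[key]
              if self.parent is not None:
                  return self.parent.get(key)
              raise KeyError(f"{key} not defined for {self.name}")

      # Tiny object base for a structural-analysis domain (illustrative values).
      component = KnowledgeObject("component", material="aluminium", safety_factor=1.5)
      beam = KnowledgeObject("beam", parent=component, length_m=2.0)
      spar = KnowledgeObject("wing_spar", parent=beam)

      print(spar.get("length_m"))        # 2.0, inherited from 'beam'
      print(spar.get("safety_factor"))   # 1.5, inherited from 'component'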

  14. 3D multi-object segmentation of cardiac MSCT imaging by using a multi-agent approach.

    PubMed

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernández, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed. PMID:18003382

  15. 3D Multi-Object Segmentation of Cardiac MSCT Imaging by using a Multi-Agent Approach

    PubMed Central

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernandez, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed. PMID:18003382

  16. Visual discrimination of rotated 3D objects in Malawi cichlids (Pseudotropheus sp.): a first indication for form constancy in fishes.

    PubMed

    Schluessel, V; Kraniotakes, H; Bleckmann, H

    2014-03-01

    Fish move in a three-dimensional environment in which it is important to discriminate between stimuli varying in colour, size, and shape. It is also advantageous to be able to recognize the same structures or individuals when presented from different angles, such as back to front or front to side. This study assessed visual discrimination abilities of rotated three-dimensional objects in eight individuals of Pseudotropheus sp. using various plastic animal models. All models were displayed in two choice experiments. After successful training, fish were presented in a range of transfer tests with objects rotated in the same plane and in space by 45° and 90° to the side or to the front. In one experiment, models were additionally rotated by 180°, i.e., shown back to front. Fish showed quick associative learning and with only one exception successfully solved and finished all experimental tasks. These results provide first evidence for form constancy in this species and in fish in general. Furthermore, Pseudotropheus seemed to be able to categorize stimuli; a range of turtle and frog models were recognized independently of colour and minor shape variations. Form constancy and categorization abilities may be important for behaviours such as foraging, recognition of predators, and conspecifics as well as for orienting within habitats or territories. PMID:23982620

  17. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS software and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  18. Qualitative Differences in the Representation of Spatial Relations for Different Object Classes

    ERIC Educational Resources Information Center

    Cooper, Eric E.; Brooks, Brian E.

    2004-01-01

    Two experiments investigated whether the representations used for animal, produce, and object recognition code spatial relations in a similar manner. Experiment 1 tested the effects of planar rotation on the recognition of animals and nonanimal objects. Response times for recognizing animals followed an inverted U-shaped function, whereas those…

  19. On the Dynamics of Action Representations Evoked by Names of Manipulable Objects

    ERIC Educational Resources Information Center

    Bub, Daniel N.; Masson, Michael E. J.

    2012-01-01

    Two classes of hand action representations are shown to be activated by listening to the name of a manipulable object (e.g., cellphone). The functional action associated with the proper use of an object is evoked soon after the onset of its name, as indicated by primed execution of that action. Priming is sustained throughout the duration of the…

  20. The Verbal Nature of Representations of the Canonical Colors of Objects

    ERIC Educational Resources Information Center

    Gleason, Tracy R.; Fiske, Kate E.; Chan, Ruth K.

    2004-01-01

    In selecting the canonical colors of color-specific objects, children may use verbal mediation, a cognitive process whereby an object and its color are matched using verbal rather than pictorial representation [British Journal of Developmental Psychology 14 (1996) 339]. To investigate this process, 108 2- to 5-year-old children were asked to…

  1. Mirror-Image Confusions: Implications for Representation and Processing of Object Orientation

    ERIC Educational Resources Information Center

    Gregory, Emma; McCloskey, Michael

    2010-01-01

    Perceiving the orientation of objects is important for interacting with the world, yet little is known about the mental representation or processing of object orientation information. The tendency of humans and other species to confuse mirror images provides a potential clue. However, the appropriate characterization of this phenomenon is not…

  2. Delaunay-Object-Dynamics: cell mechanics with a 3D kinetic and dynamic weighted Delaunay-triangulation.

    PubMed

    Meyer-Hermann, Michael

    2008-01-01

    Mathematical methods in Biology are of increasing relevance for understanding the control and the dynamics of biological systems with medical relevance. In particular, agent-based methods are becoming more and more important because rapidly increasing computational power makes even large systems accessible. An overview of different mathematical methods used in Theoretical Biology is provided, and a novel agent-based method for cell mechanics based on Delaunay-triangulations and Voronoi-tessellations is explained in more detail: the Delaunay-Object-Dynamics method. It is claimed that the model combines physically realistic cell mechanics with a reasonable computational load. The power of the approach is illustrated with two examples, avascular tumor growth and genesis of lymphoid tissue in a cell-flow equilibrium. PMID:18023735
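
    As a rough illustration of the triangulation-based neighbourhood idea behind Delaunay-Object-Dynamics (not the authors' implementation), the following Python sketch builds an unweighted 3D Delaunay triangulation of hypothetical cell centres with SciPy, reads off neighbouring cell pairs, and takes one explicit Euler step of a simple overdamped pairwise interaction; the cell radius, stiffness and time step are invented for the example.

      # Minimal sketch: Delaunay neighbours of cell centres drive a simple
      # spring-like repulsion/adhesion step (illustrative parameters only).
      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(0)
      centers = rng.uniform(0.0, 50.0, size=(200, 3))   # hypothetical cell centres
      radius = 5.0                                       # assumed uniform cell radius

      tri = Delaunay(centers)

      # Collect unique neighbour pairs from the tetrahedra of the triangulation.
      pairs = set()
      for simplex in tri.simplices:
          for i in range(4):
              for j in range(i + 1, 4):
                  pairs.add(tuple(sorted((simplex[i], simplex[j]))))

      # One explicit Euler step of overdamped dynamics with a linear pair force.
      forces = np.zeros_like(centers)
      stiffness, dt = 1.0, 0.1
      for a, b in pairs:
          d = centers[b] - centers[a]
          dist = np.linalg.norm(d)
          if dist > 0:
              overlap = 2.0 * radius - dist      # >0: repulsion, <0: adhesion
              f = stiffness * overlap * d / dist
              forces[a] -= f
              forces[b] += f
      centers += dt * forces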

  3. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    PubMed Central

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
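
    A schematic version of the distance-hypothesis analysis described above, using random numbers in place of MEG patterns and reaction times: a linear decoder is fit at the (assumed) time of peak decodability, each exemplar's distance from the decision boundary is computed, and that distance is correlated with reaction time. The data, classifier choice and variable names are illustrative only.

      import numpy as np
      from sklearn.svm import LinearSVC
      from scipy.stats import spearmanr

      rng = np.random.default_rng(1)
      n_exemplars, n_sensors = 48, 160
      X = rng.normal(size=(n_exemplars, n_sensors))     # sensor pattern per exemplar (assumed)
      y = np.repeat([0, 1], n_exemplars // 2)           # two object categories
      rt = rng.uniform(0.4, 0.9, size=n_exemplars)      # mean reaction times in seconds (assumed)

      clf = LinearSVC(C=1.0).fit(X, y)
      # Unsigned distance from the category boundary; larger = more "typical" exemplar.
      distance = np.abs(clf.decision_function(X)) / np.linalg.norm(clf.coef_)

      rho, p = spearmanr(distance, rt)
      print(f"distance-RT correlation: rho={rho:.2f}, p={p:.3f}")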

  4. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. PMID:23624577

  5. A Unified Representation Scheme for Solid Geometric Objects Using B-splines (extended Abstract)

    NASA Technical Reports Server (NTRS)

    Bahler, D.

    1985-01-01

    A geometric representation scheme called the B-spline cylinder, which consists of interpolation between pairs of uniform periodic cubic B-spline curves, is discussed. This approach carries a number of interesting implications. For one, a single relatively simple database schema can be used to represent a reasonably large class of objects, since the spline representation is flexible enough to allow a large domain of representable objects at very little cost in data complexity. The model is thus very storage-efficient. A second feature of such a system is that it reduces to one the number of routines which the system must support to perform a given operation on objects. Third, the scheme enables easy conversion to and from other representations. The formal definition of the cylinder entity is given, its geometric properties are explored, and several operations on such objects are defined. Some general-purpose criteria for evaluating any geometric representation scheme are introduced, and the B-spline cylinder scheme is evaluated against these criteria.
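
    A minimal sketch of the lofting idea behind the B-spline cylinder, assuming SciPy's spline routines as stand-ins for the paper's formulation: two closed cubic B-spline cross-sections are fitted, and points on the surface are obtained by linear interpolation between them along the cylinder axis. The cross-section shapes and parameterisation are invented for the example.

      import numpy as np
      from scipy.interpolate import splprep, splev

      theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
      bottom = np.c_[np.cos(theta), np.sin(theta)]            # circular cross-section
      top = np.c_[1.4 * np.cos(theta), 0.8 * np.sin(theta)]   # elliptical cross-section

      # Fit periodic cubic B-splines to each cross-section.
      tck_b, _ = splprep([bottom[:, 0], bottom[:, 1]], per=True, k=3, s=0)
      tck_t, _ = splprep([top[:, 0], top[:, 1]], per=True, k=3, s=0)

      u = np.linspace(0.0, 1.0, 100)
      xb, yb = splev(u, tck_b)
      xt, yt = splev(u, tck_t)

      def surface_point(ui, z):
          # Interpolate between the two curves along the cylinder axis (z in [0, 1]).
          x = (1 - z) * np.interp(ui, u, xb) + z * np.interp(ui, u, xt)
          y = (1 - z) * np.interp(ui, u, yb) + z * np.interp(ui, u, yt)
          return np.array([x, y, z])

      print(surface_point(0.25, 0.5))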

  6. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  7. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be for control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient to program a large and complicated system, that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution for this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in virtual 3D world. One of the major features of the environment is the 3D representation of concurrent process. 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful to check relationship among large number of processes or processors) and the time chart (which is useful to check precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationship among many concurrent processes. To realize the 3D representation, a technology to enable easy handling of virtual 3D object is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  8. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR - combining techniques towards an objective multiparametric interpretation

    NASA Astrophysics Data System (ADS)

    Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K.

    2015-09-01

    Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective, and visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was firstly compared to results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Passive data collection disadvantages (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter signal image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified and it compares well with the conventional trench log. According to this, a distinction of adjacent stratigraphic units was enabled by their particular multispectral composition signature. Based on the trench log, a 3-D-interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall. Thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored allowing unbiased input for future (re-)investigations.

  9. A rudimentary database for three-dimensional objects using structural representation

    NASA Technical Reports Server (NTRS)

    Sowers, James P.

    1987-01-01

    A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.

  10. Disentangling Representations of Object Shape and Object Category in Human Visual Cortex: The Animate-Inanimate Distinction.

    PubMed

    Proklova, Daria; Kaiser, Daniel; Peelen, Marius V

    2016-05-01

    Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake-rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate-inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system. PMID:26765944
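
    The regression logic described above can be sketched as follows, with simulated dissimilarity vectors standing in for the study's RDMs: visual dissimilarity is regressed out of the neural dissimilarities, and the residuals are then tested against categorical dissimilarity. The data, effect sizes and variable names are assumptions for the example.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)
      n_pairs = 28                                     # e.g. lower triangle of an 8x8 RDM
      visual = rng.uniform(size=n_pairs)               # search-time-based dissimilarity (assumed)
      category = rng.integers(0, 2, size=n_pairs).astype(float)   # same/different category
      neural = 0.5 * visual + 0.3 * category + rng.normal(0, 0.1, n_pairs)

      # Regress visual dissimilarity out of the neural RDM and test the residuals.
      design = np.c_[np.ones(n_pairs), visual]
      beta, *_ = np.linalg.lstsq(design, neural, rcond=None)
      residual = neural - design @ beta

      rho, p = spearmanr(residual, category)
      print(f"category effect after removing visual dissimilarity: rho={rho:.2f}, p={p:.3f}")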

  11. Combined robotic-aided gait training and 3D gait analysis provide objective treatment and assessment of gait in children and adolescents with Acquired Hemiplegia.

    PubMed

    Molteni, Erika; Beretta, Elena; Altomonte, Daniele; Formica, Francesca; Strazzer, Sandra

    2015-08-01

    To evaluate the feasibility of a fully objective rehabilitative and assessment process of the gait abilities in children suffering from Acquired Hemiplegia (AH), we studied the combined employment of robotic-aided gait training (RAGT) and 3D-Gait Analysis (GA). A group of 12 patients with AH underwent 20 sessions of RAGT in addition to traditional manual physical therapy (PT). All the patients were evaluated before and after the training by using the Gross Motor Function Measures (GMFM), the Functional Assessment Questionnaire (FAQ), and the 6 Minutes Walk Test. They also received GA before and after RAGT+PT. Finally, results were compared with those obtained from a control group of 3 AH children who underwent PT only. After the training, the GMFM and FAQ showed significant improvement in patients receiving RAGT+PT. GA highlighted significant improvement in stance symmetry and step length of the affected limb. Moreover, pelvic tilt increased, and hip kinematics on the sagittal plane revealed statistically significant increase in the range of motion during the hip flex-extension. Our data suggest that the combined program RAGT+PT induces improvements in functional activities and gait pattern in children with AH, and it demonstrates that the combined employment of RAGT and 3D-GA ensures a fully objective rehabilitative program. PMID:26737310

  12. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    PubMed

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program had been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in object-oriented programming language C++ using cross-platform software Qt (TrollTech, Oslo, Norway) for graphical user interface (GUI) and is transportable to any platform. PMID:17499878

  13. Storing a 3d City Model, its Levels of Detail and the Correspondences Between Objects as a 4d Combinatorial Map

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2015-10-01

    3D city models of the same region at multiple LODs are encumbered by the lack of links between corresponding objects across LODs. In practice, this causes inconsistency during updates and maintenance problems. A radical solution to this problem is to model the LOD of a model as a dimension in the geometric sense, such that a set of connected polyhedra at a series of LODs is modelled as a single polychoron—the 4D analogue of a polyhedron. This approach is generally used only conceptually and then discarded at the implementation stage, losing many of its potential advantages in the process. This paper therefore shows that this approach can be instead directly realised using 4D combinatorial maps, making it possible to store all topological relationships between objects.

  14. The Game Object Model and Expansive Learning: Creation, Instantiation, Expansion, and Re-representation

    ERIC Educational Resources Information Center

    Amory, Alan; Molomo, Bolepo; Blignaut, Seugnet

    2011-01-01

    In this paper, the collaborative development, instantiation, expansion and re-representation as research instrument of the Game Object Model (GOM) are explored from a Cultural Historical Activity Theory perspective. The aim of the paper is to develop insights into the design, integration, evaluation and use of video games in learning and teaching.…

  15. On Having Complex Representations of Things: Preschoolers Use Multiple Words for Objects and People.

    ERIC Educational Resources Information Center

    Deak, Gedeon O.; Maratsos, Michael

    1998-01-01

    Two experiments examined preschoolers' ability to apply multiple labels to representational objects and to people. Found that preschoolers reliably produced or accepted several words per entity and accepted a high percentage of class-inclusive and overlapping word pairs. The mean number of words produced in labeling task was related to receptive…

  16. The Nature of Experience Determines Object Representations in the Visual System

    ERIC Educational Resources Information Center

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  17. Distributed Representation of Visual Objects by Single Neurons in the Human Brain

    PubMed Central

    Valdez, André B.; Papesh, Megan H.; Treiman, David M.; Smith, Kris A.; Goldinger, Stephen D.

    2015-01-01

    It remains unclear how single neurons in the human brain represent whole-object visual stimuli. While recordings in both human and nonhuman primates have shown distributed representations of objects (many neurons encoding multiple objects), recordings of single neurons in the human medial temporal lobe, taken as subjects discriminated objects during multiple presentations, have shown gnostic representations (single neurons encoding one object). Because some studies suggest that repeated viewing may enhance neural selectivity for objects, we had human subjects discriminate objects in a single, more naturalistic viewing session. We found that, across 432 well isolated neurons recorded in the hippocampus and amygdala, the average fraction of objects encoded was 26%. We also found that more neurons encoded several objects versus only one object in the hippocampus (28 vs 18%, p < 0.001) and in the amygdala (30 vs 19%, p < 0.001). Thus, during realistic viewing experiences, typical neurons in the human medial temporal lobe code for a considerable range of objects, across multiple semantic categories. PMID:25834044

  18. Manipulation After Object Rotation Reveals Independent Sensorimotor Memory Representations of Digit Positions and Forces

    PubMed Central

    Zhang, Wei; Gordon, Andrew M.; Fu, Qiushi

    2010-01-01

    Planning of object manipulations is dependent on the ability to generate, store, and retrieve sensorimotor memories of previous actions associated with grasped objects. However, the sensorimotor memory representations linking object properties to the planning of grasp are not well understood. Here we use an object rotation task to gain insight into the mechanisms underlying the nature of these sensorimotor memories. We asked subjects to grasp a grip device with an asymmetrical center of mass (CM) anywhere on its vertical surfaces and lift it while minimizing object roll. After subjects learned to minimize object roll by generating a compensatory moment, they were asked to rotate the object 180° about a vertical axis and lift it again. The rotation resulted in changing the direction of external moment opposite to that experienced during the prerotation block. Anticipatory grasp control was quantified by measuring the compensatory moment generated at object lift onset by thumb and index finger forces through their respective application points. On the first postrotation trial, subjects failed to generate a compensatory moment to counter the external moment caused by the new CM location, thus resulting in a large object roll. Nevertheless, after several object rotations subjects reduced object roll on the initial postrotation trials by anticipating the new CM location through the modulation of digit placement but not tangential forces. The differential improvement in modulating these two variables supports the notion of independent memory representations of kinematics and kinetics and is discussed in relation to neural mechanisms underlying visuomotor transformations. PMID:20357064

  19. Object shape classification and scene shape representation for three-dimensional laser scanned outdoor data

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2013-02-01

    Shape analysis of a three-dimensional (3-D) scene is an important issue and could be widely used for various applications: city planning, robot navigation, virtual tourism, etc. We introduce an approach for understanding the primitive shape of the scene to reveal the semantic scene shape structure and represent the scene using shape elements. The scene objects are labeled and recognized using the geometric and semantic features for each cluster, based on knowledge of the scene. Furthermore, objects in the scene with different primitive shapes can also be classified and fitted using the Gaussian map of the segmented scene. We demonstrate the presented approach on several complex scenes from laser scanning. According to the experimental results, the proposed method can accurately represent the geometric structure of the 3-D scene.
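
    The Gaussian-map idea mentioned above can be illustrated with a toy example (thresholds and data invented, not the authors' pipeline): surface normals of a planar patch cluster tightly on the Gaussian sphere, whereas normals of a curved patch such as a cylinder wall spread along a great circle, so the angular spread of the normals separates the two primitive shapes.

      import numpy as np

      def gaussian_map_spread(normals):
          # Mean angular deviation of unit normals from their average direction.
          mean_dir = normals.mean(axis=0)
          mean_dir /= np.linalg.norm(mean_dir)
          cosines = np.clip(normals @ mean_dir, -1.0, 1.0)
          return np.degrees(np.arccos(cosines)).mean()

      rng = np.random.default_rng(6)

      # Plane: all normals point (almost) the same way.
      plane_normals = np.tile([0.0, 0.0, 1.0], (100, 1)) + rng.normal(0, 0.01, (100, 3))
      plane_normals /= np.linalg.norm(plane_normals, axis=1, keepdims=True)

      # Cylinder wall: normals sweep along an arc of the Gaussian sphere.
      theta = np.linspace(0.0, np.pi, 100)
      cyl_normals = np.c_[np.cos(theta), np.sin(theta), np.zeros(100)]

      for name, n in [("plane", plane_normals), ("cylinder", cyl_normals)]:
          spread = gaussian_map_spread(n)
          label = "planar" if spread < 10 else "curved"
          print(name, "spread =", round(spread, 1), "deg ->", label)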

  20. Multivoxel Object Representations in Adult Human Visual Cortex Are Flexible: An Associative Learning Study.

    PubMed

    Senoussi, Mehdi; Berry, Isabelle; VanRullen, Rufin; Reddy, Leila

    2016-06-01

    Learning associations between co-occurring events enables us to extract structure from our environment. Medial-temporal lobe structures are critical for associative learning. However, the role of the ventral visual pathway (VVP) in associative learning is not clear. Do multivoxel object representations in the VVP reflect newly formed associations? We show that VVP multivoxel representations become more similar to each other after human participants learn arbitrary new associations between pairs of unrelated objects (faces, houses, cars, chairs). Participants were scanned before and after 15 days of associative learning. To evaluate how object representations changed, a classifier was trained on discriminating two nonassociated categories (e.g., faces/houses) and tested on discriminating their paired associates (e.g., cars/chairs). Because the associations were arbitrary and counterbalanced across participants, there was initially no particular reason for this cross-classification decision to tend toward either alternative. Nonetheless, after learning, cross-classification performance increased in the VVP (but not hippocampus), on average by 3.3%, with some voxels showing increases of up to 10%. For example, a chair multivoxel representation that initially resembled neither face nor house representations was, after learning, classified as more similar to that of faces for participants who associated chairs with faces and to that of houses for participants who associated chairs with houses. Additionally, learning produced long-lasting perceptual consequences. In a behavioral priming experiment performed several months later, the change in cross-classification performance was correlated with the degree of priming. Thus, VVP multivoxel representations are not static but become more similar to each other after associative learning. PMID:26836513
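
    A schematic of the cross-classification test described above, using random arrays in place of fMRI multivoxel patterns: a classifier is trained to discriminate two non-associated categories and then scored on their paired associates, labelled according to the learned pairing. The category pairing, data and classifier choice are assumptions for the example.

      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(3)
      n_trials, n_voxels = 40, 300
      faces = rng.normal(0.2, 1.0, (n_trials, n_voxels))
      houses = rng.normal(-0.2, 1.0, (n_trials, n_voxels))
      cars = rng.normal(size=(n_trials, n_voxels))     # assumed to be paired with faces
      chairs = rng.normal(size=(n_trials, n_voxels))   # assumed to be paired with houses

      clf = LinearSVC().fit(np.vstack([faces, houses]),
                            np.r_[np.zeros(n_trials), np.ones(n_trials)])

      # Score the paired associates with the labels of their learned partners.
      X_test = np.vstack([cars, chairs])
      y_test = np.r_[np.zeros(n_trials), np.ones(n_trials)]
      print("cross-classification accuracy:", clf.score(X_test, y_test))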

  1. A Modified Exoskeleton for 3D Shape Description and Recognition

    NASA Astrophysics Data System (ADS)

    Lipikorn, Rajalida; Shimizu, Akinobu; Hagihara, Yoshihiro; Kobatake, Hidefumi

    Three-dimensional(3D) shape representation is a powerful tool in object recognition that is an essential process in an image processing and analysis system. Skeleton is one of the most widely used representations for object recognition, nevertheless most of the skeletons obtained from conventional methods are susceptible to rotation and noise disturbances. In this paper, we present a new 3D object representation called a modified exoskeleton (mES) which preserves skeleton properties including significant characteristics about an object that are meaningful for object recognition, and is more stable and less susceptible to rotation and noise than the skeletons. Then a 3D shape recognition methodology which determines the similarity between an observed object and other known objects in a database is introduced. Through a number of experiments on 3D artificial objects and real volumetric lung tumors extracted from CT images, it can be verified that our proposed methodology based on the mES is a simple yet efficient method that is less sensitive to rotation, noise, and independent of orientation and size of the objects.

  2. Information Object Definition–based Unified Modeling Language Representation of DICOM Structured Reporting

    PubMed Central

    Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K.P.

    2002-01-01

    Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification. PMID:11751804

  3. A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification.

    PubMed

    Kaneshiro, Blair; Perreau Guimaraes, Marcos; Kim, Hyung-Suk; Norcia, Anthony M; Suppes, Patrick

    2015-01-01

    The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes. PMID

  4. A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification

    PubMed Central

    Kaneshiro, Blair; Perreau Guimaraes, Marcos; Kim, Hyung-Suk; Norcia, Anthony M.

    2015-01-01

    The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes. PMID
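
    A small sketch of deriving a representational dissimilarity matrix from multi-class classifier confusions, in the spirit of the analysis described above; the simulated single-trial data, classifier and symmetrisation step are illustrative choices, not the authors' exact pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(4)
      n_cat, trials_per_cat, n_features = 6, 60, 124
      X = np.vstack([rng.normal(loc=i * 0.1, size=(trials_per_cat, n_features))
                     for i in range(n_cat)])
      y = np.repeat(np.arange(n_cat), trials_per_cat)

      # Cross-validated multi-class predictions on single trials.
      pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
      conf = confusion_matrix(y, pred).astype(float)
      conf /= conf.sum(axis=1, keepdims=True)          # rows become confusion probabilities

      # Higher mutual confusion -> more similar; symmetrise and invert to get an RDM.
      similarity = 0.5 * (conf + conf.T)
      rdm = 1.0 - similarity
      np.fill_diagonal(rdm, 0.0)
      print(rdm.round(2))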

  5. System for conversion between the boundary representation model and a constructive solid geometry model of an object

    DOEpatents

    Christensen, N.C.; Emery, J.D.; Smith, M.L.

    1985-04-29

    A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object. 19 figs.

  6. System for conversion between the boundary representation model and a constructive solid geometry model of an object

    DOEpatents

    Christensen, Noel C.; Emery, James D.; Smith, Maurice L.

    1988-04-05

    A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object.
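
    To make the target representation concrete, the following toy sketch assembles a symbolic constructive solid geometry tree from stock primitives and answers point-membership queries; the class names and the 2D primitives are invented for illustration and are not the patented conversion method.

      from dataclasses import dataclass

      @dataclass
      class Box:
          xmin: float
          ymin: float
          xmax: float
          ymax: float
          def contains(self, p):
              return self.xmin <= p[0] <= self.xmax and self.ymin <= p[1] <= self.ymax

      @dataclass
      class Union:
          a: object
          b: object
          def contains(self, p):
              return self.a.contains(p) or self.b.contains(p)

      @dataclass
      class Difference:
          a: object
          b: object
          def contains(self, p):
              return self.a.contains(p) and not self.b.contains(p)

      # "Stock primitives" assembled into a simple part: a plate with a notch cut out.
      part = Difference(Box(0, 0, 10, 4), Box(4, 2, 6, 4))
      print(part.contains((1, 1)), part.contains((5, 3)))   # True, False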

  7. REKRIATE: A Knowledge Representation System for Object Recognition and Scene Interpretation

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Bhasin, Sanjay; Chen, X.

    1990-02-01

    What humans actually observe and how they comprehend this information is complex due to Gestalt processes and interaction of context in predicting the course of thinking and enforcing one idea while repressing another. How we extract the knowledge from the scene, what we get from the scene indeed and what we bring from our mechanisms of perception are areas separated by a thin, ill-defined line. The purpose of this paper is to present a system for Representing Knowledge and Recognizing and Interpreting Attention Trailed Entities dubbed REKRIATE. It will be used as a tool for discovering the underlying principles involved in knowledge representation required for conceptual learning. REKRIATE has some inherited knowledge and is given a vocabulary which is used to form rules for identification of the object. It has various modalities of sensing and has the ability to measure the distance between the objects in the image as well as the similarity between different images of presumably the same object. All sensations received from a matrix of different sensors are put into an adequate form. The methodology proposed is applicable not only to pictorial or visual world representation, but to any sensing modality. It is based upon two premises: a) inseparability of all domains of the world representation, including linguistic, as well as those formed by various sensor modalities, and b) representativity of the object at several levels of resolution simultaneously.

  8. Neural signatures for sustaining object representations attributed to others in preverbal human infants

    PubMed Central

    Kampis, Dora; Parise, Eugenio; Csibra, Gergely; Kovács, Ágnes Melinda

    2015-01-01

    A major feat of social beings is to encode what their conspecifics see, know or believe. While various non-human animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people's mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents' mental states, specifically metarepresentations. We explored the neurocognitive bases of eight-month-olds' ability to encode the world from another person's perspective, using gamma-band electroencephalographic activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants' perspective, as well as when it was occluded only from the other person (study 1), and also when subsequently the object disappeared, but the person falsely believed the object to be present (study 2). These findings suggest that the cognitive systems involved in representing the world from infants' own perspective are also recruited for encoding others' beliefs. Such results point to an early-developing, powerful apparatus suitable to deal with multiple concurrent representations, and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language. PMID:26559949

  9. Neural signatures for sustaining object representations attributed to others in preverbal human infants.

    PubMed

    Kampis, Dora; Parise, Eugenio; Csibra, Gergely; Kovács, Ágnes Melinda

    2015-11-22

    A major feat of social beings is to encode what their conspecifics see, know or believe. While various non-human animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people's mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents' mental states, specifically metarepresentations. We explored the neurocognitive bases of eight-month-olds' ability to encode the world from another person's perspective, using gamma-band electroencephalographic activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants' perspective, as well as when it was occluded only from the other person (study 1), and also when subsequently the object disappeared, but the person falsely believed the object to be present (study 2). These findings suggest that the cognitive systems involved in representing the world from infants' own perspective are also recruited for encoding others' beliefs. Such results point to an early-developing, powerful apparatus suitable to deal with multiple concurrent representations, and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language. PMID:26559949

  10. Computational modeling of the neural representation of object shape in the primate ventral visual system

    PubMed Central

    Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.

    2015-01-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766

  11. Object representations and their relationship to psychopathology and physical health status in African-American women in primary care.

    PubMed

    Porcerelli, John H; Huprich, Steven K; Binienda, Juliann; Karana, Dunia

    2006-11-01

    Object relations theories hypothesize a relationship between self/other representations and level of psychopathology. Research has lent support to this hypothesis. This study was conducted to examine the link between object representation and psychopathology, stress, physical health status, and alcohol abuse in 110 African-American women in primary care. Object representations were assessed through spontaneous descriptions of parents. Psychopathology and physical health status were assessed with the Patient Health Questionnaire and the Medical Outcomes Study Short-Form Health Survey, both of which were designed for medical settings. The results support the link between dimensions of object representations (developmental level, benevolence, punitiveness) and psychopathology and between object representations and aspects of health status. Punitive maternal and paternal representations were most robustly associated with psychopathology and health status and were the only representational variables associated with alcohol abuse. The findings provide additional support for the object representations-psychopathology link and extend the research by demonstrating associations among object representations, alcohol abuse, and health status. PMID:17102708

  12. Quality of object representations related to service utilization in a long-term residential treatment center.

    PubMed

    Fowler, J Christopher; Defife, Jared A

    2012-09-01

    The current study examines patient factors related to service utilization during intensive treatment for 66 residential patients suffering from severe mental illness. We examined the relationship among demographic, psychiatric severity, and quality of object representation variables with individual and group psychotherapy sessions attended and emergency department transfers. Hierarchical linear regression models indicate malevolent affective expectations of interpersonal relationships embedded in patient narratives is uniquely related to individual psychotherapy attendance. A three-variable model consisting of higher educational status, number of axis I/II disorders, and poor understanding of social causality was related to transfers to emergency departments owing to self-destructive behavior. Quality of object representation of self and others was uniquely related to treatment use and self-destructive behaviors. Results highlight the importance of a comprehensive multimodal evaluation for improving treatment preparation, planning, and intervention. Clinical implications are considered. PMID:22962978

  13. Representations and antinomies: rural and city social objects in a Brazilian peasant community.

    PubMed

    Bonomo, Mariana; de Souza, Lídio; Trindade, Zeidi Araujo; Menandro, Maria Cristina Smith

    2013-01-01

    The present work is part of a series of studies that primarily focus on social representations of rural and city objects in the process of constructing a social identity of the countryside. Using social representation theory, this study aimed to investigate the representational field linked to the rural and city objects for the members of a peasant community. A total of 200 members of a Brazilian rural community from four generational groups, of both sexes and aged between 7 and 81 years, participated in this study. We conducted individual interviews with semi-structured scripts. The data corpora, processed using EVOC software, consisted of free associations of the rural and city inductor terms. In constitutive terms, the results allow for the identification of antinomies between the objects discussed; in functional terms, they indicate that the process of constructing social identity is based on the symbolic field, which acts as a reference system for the preparation of the rural identity shared by the participants. PMID:24230933

  14. Evidence for spatial representation of object shape by echolocating bats (Eptesicus fuscus)

    PubMed Central

    DeLong, Caroline M.; Bragg, Rebecca; Simmons, James A.

    2008-01-01

    Big brown bats were trained in a two-choice task to locate a two-cylinder dipole object with a constant 5 cm spacing in the presence of either a one-cylinder monopole or another two-cylinder dipole with a shorter spacing. For the dipole versus monopole task, the objects were either stationary or in motion during each trial. The dipole and monopole objects varied from trial to trial in the left-right position while also roving in range (10–40 cm), cross range separation (15–40 cm), and dipole aspect angle (0°–90°). These manipulations prevented any single feature of the acoustic stimuli from being a stable indicator of which object was the correct choice. After accounting for effects of masking between echoes from pairs of cylinders at similar distances, the bats discriminated the 5 cm dipole from both the monopole and dipole alternatives with performance independent of aspect angle, implying a distal, spatial object representation rather than a proximal, acoustic object representation. PMID:18537406

  15. Making the Invisible Visible: Enhancing Students' Conceptual Understanding by Introducing Representations of Abstract Objects in a Simulation

    ERIC Educational Resources Information Center

    Olympiou, Georgios; Zacharias, Zacharia; deJong, Ton

    2013-01-01

    This study aimed to identify if complementing representations of concrete objects with representations of abstract objects improves students' conceptual understanding as they use a simulation to experiment in the domain of "Light and Color". Moreover, we investigated whether students' prior knowledge is a factor that must be considered in deciding…

  16. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning, stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class object feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition. PMID:26906591
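
    The sparse-coding stage can be sketched as follows, with scikit-learn's dictionary learner standing in for the K-singular value decomposition algorithm named above and random feature vectors in place of the SIFT/bag-of-words features: one sub-dictionary is learned per class, and a candidate is assigned to the class whose atoms reconstruct it with the smallest error. All names and parameters are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

      rng = np.random.default_rng(5)
      n_classes, per_class, dim = 3, 50, 128
      X_train = np.vstack([rng.normal(loc=c, scale=1.0, size=(per_class, dim))
                           for c in range(n_classes)])
      y_train = np.repeat(np.arange(n_classes), per_class)

      # Learn one small sub-dictionary per class (stand-in for K-SVD).
      dicts = [MiniBatchDictionaryLearning(n_components=10, random_state=0)
               .fit(X_train[y_train == c]).components_ for c in range(n_classes)]

      def classify(x):
          # Assign the class whose dictionary gives the smallest reconstruction error.
          errors = []
          for D in dicts:
              code = sparse_encode(x[None, :], D, algorithm="omp", n_nonzero_coefs=5)
              errors.append(np.linalg.norm(x - code @ D))
          return int(np.argmin(errors))

      print(classify(X_train[0]), classify(X_train[-1]))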

  17. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    PubMed

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. PMID:25583612

  18. The Representation of Objects in Apraxia: From Action Execution to Error Awareness

    PubMed Central

    Canzano, Loredana; Scandola, Michele; Gobbetto, Valeria; Moretto, Giuseppe; D’Imperio, Daniela; Moro, Valentina

    2016-01-01

    Apraxia is a well-known syndrome characterized by the sufferer’s inability to perform routine gestures. In an attempt to understand the syndrome better, various different theories have been developed and a number of classifications of different subtypes have been proposed. In this review article, we will address these theories with a specific focus on how the use of objects helps us to better understand upper limb apraxia. With this aim, we will consider transitive vs. intransitive action dissociation as well as less frequent types of apraxia involving objects, i.e., constructive apraxia and magnetic apraxia. Pantomime and the imitation of objects in use are also considered with a view to dissociating the various different components involved in upper limb apraxia. Finally, we discuss the evidence relating to action recognition and awareness of errors in the execution of actions. Various different components concerning the use of objects emerge from our analysis and the results show that knowledge of an object and sensory-motor representations are supported by other functions such as spatial and body representations, executive functions and monitoring systems. PMID:26903843

  19. How category learning affects object representations: Not all morphspaces stretch alike

    PubMed Central

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2012-01-01

    How does learning to categorize objects affect how we visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies find that objects become more visually discriminable along dimensions relevant to previously learned categories, while others find no such effect. One critical factor we explore here lies in the structure of the morphspaces used in different studies. Studies finding no increase in discriminability often use “blended” morphspaces, with morphparents lying at corners of the space. By contrast, studies finding increases in discriminability use “factorial” morphspaces, defined by separate morphlines forming axes of the space. Using the same four morphparents, we created both factorial and blended morphspaces matched in pairwise discriminability. Category learning caused a selective increase in discriminability along the relevant dimension of the factorial space, but not in the blended space, and led to the creation of functional dimensions in the factorial space, but not in the blended space. These findings demonstrate that not all morphspaces stretch alike: Only some morphspaces support enhanced discriminability to relevant object dimensions following category learning. Our results have important implications for interpreting neuroimaging studies reporting little or no effect of category learning on object representations in the visual system: Those studies may have been limited by their use of blended morphspaces. PMID:22746950

  20. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation

    NASA Astrophysics Data System (ADS)

    Balvert, Marleen; van Hoof, Stefan J.; Granton, Patrick V.; Trani, Daniela; den Hertog, Dick; Hoffmann, Aswin L.; Verhaegen, Frank

    2015-07-01

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans becomes difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight on the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and

  1. A framework for inverse planning of beam-on times for 3D small animal radiotherapy using interactive multi-objective optimisation.

    PubMed

    Balvert, Marleen; van Hoof, Stefan J; Granton, Patrick V; Trani, Daniela; den Hertog, Dick; Hoffmann, Aswin L; Verhaegen, Frank

    2015-07-21

    Advances in precision small animal radiotherapy hardware enable the delivery of increasingly complicated dose distributions on the millimeter scale. Manual creation and evaluation of treatment plans becomes difficult or even infeasible with an increasing number of degrees of freedom for dose delivery and available image data. The goal of this work is to develop an optimisation model that determines beam-on times for a given beam configuration, and to assess the feasibility and benefits of an automated treatment planning system for small animal radiotherapy. The developed model determines a Pareto optimal solution using operator-defined weights for a multiple-objective treatment planning problem. An interactive approach allows the planner to navigate towards, and to select the Pareto optimal treatment plan that yields the most preferred trade-off of the conflicting objectives. This model was evaluated using four small animal cases based on cone-beam computed tomography images. Resulting treatment plan quality was compared to the quality of manually optimised treatment plans using dose-volume histograms and metrics. Results show that the developed framework is well capable of optimising beam-on times for 3D dose distributions and offers several advantages over manual treatment plan optimisation. For all cases but the simple flank tumour case, a similar amount of time was needed for manual and automated beam-on time optimisation. In this time frame, manual optimisation generates a single treatment plan, while the inverse planning system yields a set of Pareto optimal solutions which provides quantitative insight on the sensitivity of conflicting objectives. Treatment planning automation decreases the dependence on operator experience and allows for the use of class solutions for similar treatment scenarios. This can shorten the time required for treatment planning and therefore increase animal throughput. In addition, this can improve treatment standardisation and
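
    The weighted-sum scalarisation described in the two records above can be illustrated with a toy linear-programming sketch. This is not the authors' implementation: the dose-influence matrices, prescription level, and weights below are invented placeholders, and the real system exposes the weights interactively so the planner can navigate the Pareto front.

```python
# Sketch of a weighted-sum scalarisation for beam-on time optimisation.
# All quantities (dose-influence matrices, prescription, weights) are
# illustrative placeholders, not the authors' data or model.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_beams = 8
A_tumor = rng.uniform(0.5, 1.0, size=(50, n_beams))   # dose per unit beam-on time, tumour voxels
A_oar   = rng.uniform(0.0, 0.3, size=(200, n_beams))  # dose per unit beam-on time, organ-at-risk voxels
d_rx    = 2.0                                         # prescribed minimum tumour dose (Gy)

def solve(w_oar, w_time):
    """Minimise a weighted sum of mean OAR dose and total beam-on time,
    subject to every tumour voxel receiving at least the prescription."""
    c = w_oar * A_oar.mean(axis=0) + w_time * np.ones(n_beams)
    res = linprog(c, A_ub=-A_tumor, b_ub=-d_rx * np.ones(A_tumor.shape[0]),
                  bounds=[(0, None)] * n_beams, method="highs")
    return res.x

# Sweeping the weights traces out candidate Pareto-optimal plans the
# planner could then navigate between interactively.
for w in (0.1, 1.0, 10.0):
    t = solve(w_oar=w, w_time=0.01)
    print(f"w_oar={w}: mean OAR dose={A_oar.mean(axis=0) @ t:.2f}, total beam-on time={t.sum():.2f}")
```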

  2. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system; it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation from paper form to digital data is complex and difficult, particularly owing to the different types of drawings, the forms of displayed objects, and the errors and deviations from technical standards encountered in practice. This contribution describes an algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing the rotational part was used for verification.

  3. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.

    PubMed

    Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija

    2015-08-01

    A body of research shows that grammatical gender, although an arbitrary category, is viewed as a system with its own meaning. However, the question remains to what extent grammatical gender influences the shaping of our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results of Experiment 1 showed that grammatical gender, as a linguistic property of the pseudo-nouns used as names for musical instruments, significantly affects people's representations of these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments is shaped in the presence of both language and visual information. The results indicate that when linguistic and visual information co-exist, concepts about the selected instruments are formed from all available information from both sources, suggesting that grammatical gender influences the forming of nonverbal concepts but has no privileged status in the matter. PMID:24595378

  4. Eye fixation during multiple object attention is based on a representation of discrete spatial foci.

    PubMed

    Fluharty, Meg; Jentzsch, Ines; Spitschan, Manuel; Vishwanath, Dhanraj

    2016-01-01

    We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, simple 2-foci division of attention or a circular spotlight. Moreover, attentional performance was incompatible with serial selection. Together with previous studies, our findings are compatible with a view that attentional selection and fixation rely on shared spatial representations and suggest a more nuanced definition of overt vs. covert attention. PMID:27561413

  5. Eye fixation during multiple object attention is based on a representation of discrete spatial foci

    PubMed Central

    Fluharty, Meg; Jentzsch, Ines; Spitschan, Manuel; Vishwanath, Dhanraj

    2016-01-01

    We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, simple 2-foci division of attention or a circular spotlight. Moreover, attentional performance was incompatible with serial selection. Together with previous studies, our findings are compatible with a view that attentional selection and fixation rely on shared spatial representations and suggest a more nuanced definition of overt vs. covert attention. PMID:27561413

  6. An object-based compression system for a class of dynamic image-based representations

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Ng, King-To; Chan, Shing-Chow; Shum, Heung-Yeung

    2005-07-01

    This paper proposes a new object-based compression system for a class of dynamic image-based representations called plenoptic videos (PVs). PVs are simplified dynamic light fields, where the videos are taken at regularly spaced locations along line segments instead of a 2-D plane. The proposed system employs an object-based approach, where objects at different depth values are segmented to improve the rendering quality as in the pop-up light fields. Furthermore, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. Besides supporting the coding of the texture and binary shape maps for IBR objects with arbitrary shapes, the proposed system also supports the coding of gray-scale alpha maps as well as geometry information in the form of depth maps to respectively facilitate the matting and rendering of the IBR objects. To improve the coding performance, the proposed compression system exploits both the temporal redundancy and spatial redundancy among the video object streams in the PV by employing disparity-compensated prediction or spatial prediction in its texture, shape and depth coding processes. To demonstrate the principle and effectiveness of the proposed system, a multiple video camera system was built and experimental results show that considerable improvements in coding performance are obtained for both synthetic and real scenes, while supporting the stated object-based functionalities.

  7. Improvement and characterization of the adhesion of electrospun PLDLA nanofibers on PLDLA-based 3D object substrates for orthopedic application.

    PubMed

    Wimpenny, I; Lahteenkorva, K; Suokas, E; Ashammakhi, N; Yang, Y

    2012-01-01

    Intensive research has demonstrated the clear biological potential of electrospun nanofibers for tissue regeneration and repair. However, nanofibers alone have limited mechanical properties. In this study we took poly(L-lactide-co-D-lactide) (PLDLA)-based 3D objects, one existing medical device (interference screws) and one medical device model (discs) as examples to form composites through coating their surface with electrospun PLDLA nanofibers. We specifically investigated the effects of electrospinning parameters on the improvement of adhesion of the electrospun nanofibers to the PLDLA-based substrates. To reveal the adhesion mechanisms, a novel peel test protocol was developed for the characterization of the adhesion and delamination phenomenon of the nanofibers deposited to substrates. The effect of incubation of the composites under physiological conditions on the adhesion of the nanofibers has also been studied. It was revealed that reduction of the working distance to 10 cm resulted in deposition of residual solvent during electrospinning of nanofibers onto the substrate, causing fiber-fiber bonding. Delamination of this coating occurred between the whole nanofiber layer and substrate, at low stress. Fibers deposited at 15 cm working distance were of smaller diameter and no residual solvent was observed during deposition. Delamination occurred between nanofiber layers, which peeled off under greater stress. This study represents a novel method for the alteration of nanofiber adhesion to substrates, and quantification of the change in the adhesion state, which has potential applications to develop better medical devices for orthopedic tissue repair and regeneration. PMID:21943952

  8. fMRI-adaptation evidence of overlapping neural representations for objects related in function or manipulation.

    PubMed

    Yee, Eiling; Drucker, Daniel M; Thompson-Schill, Sharon L

    2010-04-01

    Sensorimotor-based theories of semantic memory contend that semantic information about an object is represented in the neural substrate invoked when we perceive or interact with it. We used fMRI adaptation to test this prediction, measuring brain activation as participants read pairs of words. Pairs shared function (flashlight-lantern), shape (marble-grape), both (pencil-pen), were unrelated (saucer-needle), or were identical (drill-drill). We observed adaptation for pairs with both function and shape similarity in left premotor cortex. Further, degree of function similarity was correlated with adaptation in three regions: two in the left temporal lobe (left medial temporal lobe, left middle temporal gyrus), which has been hypothesized to play a role in multimodal integration, and one in left superior frontal gyrus. We also found that degree of manipulation (i.e., action) and function similarity were both correlated with adaptation in two regions: left premotor cortex and left intraparietal sulcus (involved in guiding actions). Additional considerations suggest that the adaptation in these two regions was driven by manipulation similarity alone; thus, these results imply that manipulation information about objects is encoded in brain regions involved in performing or guiding actions. Unexpectedly, these same two regions showed increased activation (rather than adaptation) for objects similar in shape. Overall, we found evidence (in the form of adaptation) that objects that share semantic features have overlapping representations. Further, the particular regions of overlap provide support for the existence of both sensorimotor and amodal/multimodal representations. PMID:20034582

  9. fMRI-Adaptation Evidence of Overlapping Neural Representations for Objects Related in Function or Manipulation

    PubMed Central

    Yee, Eiling; Drucker, Daniel M.; Thompson-Schill, Sharon L.

    2010-01-01

    Sensorimotor-based theories of semantic memory contend that semantic information about an object is represented in the neural substrate invoked when we perceive or interact with it. We used fMRI adaptation to test this prediction, measuring brain activation as participants read pairs of words. Pairs shared function (flashlight–lantern), shape (marble–grape), both (pencil–pen), were unrelated (saucer–needle), or were identical (drill–drill). We observed adaptation for pairs with both function and shape similarity in left premotor cortex. Further, degree of function similarity was correlated with adaptation in three regions: two in the left temporal lobe (left medial temporal lobe, left middle temporal gyrus), which has been hypothesized to play a role in multimodal integration, and one in left superior frontal gyrus. We also found that degree of manipulation (i.e., action) and function similarity were both correlated with adaptation in two regions: left premotor cortex and left intraparietal sulcus (involved in guiding actions). Additional considerations suggest that the adaptation in these two regions was driven by manipulation similarity alone; thus, these results imply that manipulation information about objects is encoded in brain regions involved in performing or guiding actions. Unexpectedly, these same two regions showed increased activation (rather than adaptation) for objects similar in shape. Overall, we found evidence (in the form of adaptation) that objects that share semantic features have overlapping representations. Further, the particular regions of overlap provide support for the existence of both sensorimotor and amodal/multimodal representations. PMID:20034582

  10. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a

  11. 3D representation of geochemical data, the corresponding alteration and associated REE mobility at the Ranger uranium deposit, Northern Territory, Australia

    NASA Astrophysics Data System (ADS)

    Fisher, Louise A.; Cleverley, James S.; Pownceby, Mark; MacRae, Colin

    2013-12-01

    Interrogation and 3D visualisation of multiple multi-element data sets collected at the Ranger 1 No. 3 uranium mine, in the Northern Territory of Australia, show a distinct and large-scale chemical zonation around the ore body. A central zone of Mg alteration, dominated by extensive clinochlore alteration, overprints a biotite-muscovite-K-feldspar assemblage which shows increasing loss of Na, Ba and Ca moving towards the ore body. Manipulation of pre-existing geochemical data and integration of new data collected from targeted `niche' samples make it possible to recognise chemical architecture within the system and identify potential fluid conduits. New trace element and rare earth element (REE) data show strong fractionation associated with the zoned alteration around the deposit and with fault planes that intersect and bound the deposit. Within the most altered portion of the system, isocon analysis indicates addition of elements including Mg, S, Cu, Au and Ni and removal of elements including Ca, K, Ba and Na within a zone of damage associated with ore precipitation. In the more distal parts of the system, processes of alteration and replacement associated with the mineralising system can be recognised. REE element data show enrichment in HREE centred about a characteristic peak in Dy in the high-grade ore zone while LREEs are enriched in the outermost portions of the system. The patterns recognised in 3D in zoning of geochemical groups and contoured S, K and Mg abundance and the observed REE patterns suggest a fluid flow regime in which fluids were predominately migrating upwards during ore deposition within the core of the ore system.

  12. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free floating experimental platform developed for the acquisition of long duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES Satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver, and a combined rotation-translation maneuver. The effectiveness of each maneuver in generating slosh is determined by the deviation of the experiment's trajectory as compared to a dry mass trajectory. To fully capture the effect of liquid re-distribution on experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh generating maneuvers for all motions occur when SSE thrusters are producing the highest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  13. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features were successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.
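
    The parametric eigenspace idea mentioned above can be sketched in a few lines: project gray-level images onto the leading principal components of a training set and match a query to its nearest neighbour in that subspace. This is only an illustrative sketch under assumed array shapes; the report's pose-measurement details (partial occlusion handling, binary features) are not reproduced.

```python
# Minimal sketch of appearance-based (eigenspace) matching with PCA.
import numpy as np

def build_eigenspace(train_imgs, k=10):
    """train_imgs: (n_samples, n_pixels) array of vectorised gray-level images."""
    mean = train_imgs.mean(axis=0)
    X = train_imgs - mean
    # Principal directions via SVD of the centred data matrix (k <= n_samples).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                 # (k, n_pixels) eigenspace basis
    coords = X @ basis.T           # training coordinates in the eigenspace
    return mean, basis, coords

def match(query, mean, basis, coords):
    """Return the index of the nearest training sample in the eigenspace."""
    q = (query - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - q, axis=1)))
```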

  14. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three dimensional field components within the windings of accelerator magnets. The form by which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  15. Local object patterns for the representation and classification of colon tissue images.

    PubMed

    Olgun, Gulden; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2014-07-01

    This paper presents a new approach for the effective representation and classification of images of histopathological colon tissues stained with hematoxylin and eosin. In this approach, we propose to decompose a tissue image into its histological components and introduce a set of new texture descriptors, which we call local object patterns, on these components to model their composition within a tissue. We define these descriptors using the idea of local binary patterns, which quantify a pixel by constructing a binary string based on relative intensities of its neighbors. However, as opposed to pixel-level local binary patterns, we define our local object pattern descriptors at the component level to quantify a component. To this end, we specify neighborhoods with different locality ranges and encode spatial arrangements of the components within the specified local neighborhoods by generating strings. We then extract our texture descriptors from these strings to characterize histological components and construct the bag-of-words representation of an image from the characterized components. Working on microscopic images of colon tissues, our experiments reveal that the use of these component-level texture descriptors results in higher classification accuracies than the previous textural approaches. PMID:24043411
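
    For readers unfamiliar with the underlying descriptor, the sketch below shows a standard pixel-level local binary pattern and its histogram. The paper's contribution, local object patterns computed over segmented histological components, is a component-level generalisation of this idea and is not reproduced here.

```python
# Standard 8-neighbour local binary pattern (LBP) and its histogram,
# the pixel-level descriptor that local object patterns generalise.
import numpy as np

def lbp_image(img):
    """img: 2-D gray-level array; returns the 8-bit LBP code of each interior pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        # Set bit if the neighbour is at least as bright as the centre pixel.
        codes += (n >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    # Bag-of-words style descriptor: histogram of LBP codes over the image.
    return np.bincount(lbp_image(img).ravel(), minlength=256)
```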

  16. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    NASA Astrophysics Data System (ADS)

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and the assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis in works by Goldin-Meadow, 2000 and McNeill, 1992, this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students in constructing the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that in the laboratory setting language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the gestural patterns of the students allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  17. Multitask joint spatial pyramid matching using sparse representation with dynamic coefficients for object recognition

    NASA Astrophysics Data System (ADS)

    Hajigholam, Mohammad-Hossein; Raie, Abolghasem-Asadollah; Faez, Karim

    2016-03-01

    Object recognition is considered a necessary part of many computer vision applications. Recently, sparse coding methods, based on representing a sparse feature from an image, show remarkable results on several object recognition benchmarks, but the precision obtained by these methods is not yet sufficient. Such a problem arises where there are few training images available. As such, using multiple features and multitask dictionaries appears to be crucial to achieving better results. We use multitask joint sparse representation, using dynamic coefficients to connect these sparse features. In other words, we calculate the importance of each feature for each class separately. This causes the features to be used efficiently and appropriately for each class. Thus, we use feature variance and particle swarm optimization to obtain these dynamic coefficients. Experimental results of our work on the Caltech-101 and Caltech-256 databases show higher accuracy than state-of-the-art methods on the same databases.
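
    A minimal sketch of the fusion idea follows: each feature type is sparse-coded against a dictionary of training samples, per-class reconstruction residuals are computed, and the residuals are combined with per-feature, per-class weights (the "dynamic coefficients"). The dictionaries, features, and weights below are placeholders; the paper's variance/PSO weight estimation and spatial pyramid matching are not shown.

```python
# Sketch of sparse-representation classification fused over multiple
# feature types with per-feature, per-class weights. Illustrative only.
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(x, D, labels, alpha=0.01):
    """Sparse-code query x against dictionary D (columns = training samples)
    and return one reconstruction residual per class."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)
    a = coder.coef_
    residuals = []
    for c in np.unique(labels):
        mask = (labels == c)
        residuals.append(np.linalg.norm(x - D[:, mask] @ a[mask]))
    return np.array(residuals)

def classify(x_by_feature, D_by_feature, labels, weights):
    """weights[f, c]: assumed importance of feature type f for class c."""
    total = np.zeros(len(np.unique(labels)))
    for f, (x, D) in enumerate(zip(x_by_feature, D_by_feature)):
        total += weights[f] * src_residuals(x, D, labels)
    # Predict the class whose training samples reconstruct the query best.
    return np.unique(labels)[np.argmin(total)]
```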

  18. Object tracking for a class of dynamic image-based representations

    NASA Astrophysics Data System (ADS)

    Gan, Zhi-Feng; Chan, Shing-Chow; Ng, King-To; Shum, Heung-Yeung

    2005-07-01

    Image-based rendering (IBR) is an emerging technology for photo-realistic rendering of scenes from a collection of densely sampled images and videos. Recently, an object-based approach for rendering and the compression of a class of dynamic image-based representations called plenoptic videos was proposed. The plenoptic video is a simplified dynamic light field, which is obtained by capturing videos at regularly spaced locations along a series of line segments. In the object-based approach, objects at large depth differences are segmented into layers for rendering and compression. The rendering quality in large environments can be significantly improved, as demonstrated by the pop-up lightfields. In addition, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. An important step in the object-based approach is to segment the objects in the video streams into layers or image-based objects, which is largely done by semi-automatic techniques. To reduce the time needed for segmenting plenoptic videos, efficient tracking techniques are highly desirable. This paper proposes a new automatic object tracking method based on the level-set method. Our method, which utilizes both local and global features of the image sequences instead of only the global features exploited in previous approaches, can achieve better tracking results for objects, especially those with non-uniform energy distribution. Due to possible segmentation errors around object boundaries, natural matting with a Bayesian approach is also incorporated into our system. Using the alpha map and texture so estimated, it is very convenient to composite the image-based objects onto the background of the original or other plenoptic videos. Furthermore, an MPEG-4-like object-based algorithm is developed for compressing the plenoptic videos, which consist of the alpha maps, depth maps and textures of the

  19. The Structure of Three-Dimensional Object Representations in Human Vision: Evidence from Whole-Part Matching

    ERIC Educational Resources Information Center

    Leek, E. Charles; Reppa, Irene; Arguin, Martin

    2005-01-01

    This article examines how the human visual system represents the shapes of 3-dimensional (3D) objects. One long-standing hypothesis is that object shapes are represented in terms of volumetric component parts and their spatial configuration. This hypothesis is examined in 3 experiments using a whole-part matching paradigm in which participants…

  20. An efficient memetic algorithm for 3D shape matching problems

    NASA Astrophysics Data System (ADS)

    Sharif Khan, Mohammad; Mohamad Ayob, Ahmad F.; Ray, Tapabrata

    2014-05-01

    Shape representation plays a vital role in any shape optimization exercise. The ability to identify a shape with good functional properties is dependent on the underlying shape representation scheme, the morphing mechanism and the efficiency of the optimization algorithm. This article presents a novel and efficient methodology for morphing 3D shapes via smart repair of control points. The repaired sequence of control points is subsequently used to define the 3D object using a B-spline surface representation. The control points are evolved within the framework of a memetic algorithm for greater efficiency. While the authors have already proposed an approach for 2D shape matching, this article extends it further to deal with 3D shape matching problems. Three 3D test cases and a real customized 3D earplug design are used as examples to illustrate the performance of the proposed approach and the effectiveness of the repair scheme. Complete details of the problems are presented for future work in this direction.

  1. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD $5,000. This scanner uses visible light sensing to capture both structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  2. Planning 3-D collision-free paths using spheres

    NASA Technical Reports Server (NTRS)

    Bonner, Susan; Kelley, Robert B.

    1989-01-01

    A scheme for the representation of objects, the Successive Spherical Approximation (SSA), facilitates the rapid planning of collision-free paths in a 3-D, dynamic environment. The hierarchical nature of the SSA allows collision-free paths to be determined efficiently while still providing for the exact representation of dynamic objects. The concept of a freespace cell is introduced to allow human 3-D conceptual knowledge to be used in facilitating satisfying choices for paths. Collisions can be detected at a rate better than 1 second per environment object per path. This speed enables the path planning process to apply a hierarchy of rules to create a heuristically satisfying collision-free path.
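
    The core test behind sphere-based path planning is an inexpensive sphere-sphere overlap check, sketched below under the assumption that each object has already been approximated by a set of spheres. The hierarchical refinement of the Successive Spherical Approximation and the freespace-cell heuristics described above are not reproduced.

```python
# Minimal sketch of sphere-based collision checking: two objects
# approximated by sets of spheres are collision-free at a path step
# if no sphere pair overlaps.
import numpy as np

def spheres_collide(spheres_a, spheres_b):
    """Each argument: (n, 4) array-like of (x, y, z, radius)."""
    for xa, ya, za, ra in spheres_a:
        for xb, yb, zb, rb in spheres_b:
            d2 = (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
            if d2 <= (ra + rb) ** 2:
                return True
    return False

# Example: a robot approximated by two spheres against one obstacle sphere.
robot = np.array([[0.0, 0.0, 0.0, 0.5], [1.0, 0.0, 0.0, 0.5]])
obstacle = np.array([[1.2, 0.0, 0.0, 0.3]])
print(spheres_collide(robot, obstacle))  # True: the second robot sphere overlaps
```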

  3. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
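
    To illustrate how a space-filling curve linearises 3D cells so that nearby objects receive nearby keys, the sketch below uses the simpler Morton (Z-order) interleaving rather than the 3D Hilbert curve proposed in the paper; the clustering principle, sorting objects by their curve index, is the same.

```python
# Illustrative Morton (Z-order) key for 3-D integer cell coordinates.
# This is a stand-in for the 3-D Hilbert curve used in the paper.
def morton3d(x, y, z, bits=10):
    """Interleave the low `bits` bits of integer coordinates x, y, z."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Sorting building blocks by this key clusters spatially adjacent blocks,
# which is what speeds up range and nearest-neighbour queries.
blocks = [(3, 1, 7), (60, 50, 2), (3, 2, 7)]
print(sorted(blocks, key=lambda b: morton3d(*b)))
```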

  4. Is a bear white in the woods? Parallel representation of implied object color during language comprehension.

    PubMed

    Connell, Louise; Lynott, Dermot

    2009-06-01

    Color is undeniably important to object representations, but so too is the ability of context to alter the color of an object. The present study examined how implied perceptual information about typical and atypical colors is represented during language comprehension. Participants read sentences that implied a (typical or atypical) color for a target object and then performed a modified Stroop task in which they named the ink color of the target word (typical, atypical, or unrelated). Results showed that color naming was facilitated both when ink color was typical for that object (e.g., bear in brown ink) and when it matched the color implied by the previous sentence (e.g., bear in white ink following Joe was excited to see a bear at the North Pole). These findings suggest that unusual contexts cause people to represent in parallel both typical and scenario-specific perceptual information, and these types of information are discussed in relation to the specialization of perceptual simulations. PMID:19451387

  5. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…

  6. You shall know an object by the company it keeps: An investigation of semantic representations derived from object co-occurrence in visual scenes.

    PubMed

    Sadeghi, Zahra; McClelland, James L; Hoffman, Paul

    2015-09-01

    An influential position in lexical semantics holds that semantic representations for words can be derived through analysis of patterns of lexical co-occurrence in large language corpora. Firth (1957) famously summarised this principle as "you shall know a word by the company it keeps". We explored whether the same principle could be applied to non-verbal patterns of object co-occurrence in natural scenes. We performed latent semantic analysis (LSA) on a set of photographed scenes in which all of the objects present had been manually labelled. This resulted in a representation of objects in a high-dimensional space in which similarity between two objects indicated the degree to which they appeared in similar scenes. These representations revealed similarities among objects belonging to the same taxonomic category (e.g., items of clothing) as well as cross-category associations (e.g., between fruits and kitchen utensils). We also compared representations generated from this scene dataset with two established methods for elucidating semantic representations: (a) a published database of semantic features generated verbally by participants and (b) LSA applied to a linguistic corpus in the usual fashion. Statistical comparisons of the three methods indicated significant association between the structures revealed by each method, with the scene dataset displaying greater convergence with feature-based representations than did LSA applied to linguistic data. The results indicate that information about the conceptual significance of objects can be extracted from their patterns of co-occurrence in natural environments, opening the possibility for such data to be incorporated into existing models of conceptual representation. PMID:25196838
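
    The analysis pipeline described above can be illustrated with a toy object-by-scene count matrix: truncated SVD yields low-dimensional object vectors whose cosine similarity reflects how often objects co-occur in similar scenes. The counts below are invented for illustration; the study used manually labelled photographed scenes.

```python
# Sketch of latent semantic analysis on an object-by-scene count matrix.
import numpy as np

# rows = objects, columns = scenes, entries = occurrence counts (toy data)
counts = np.array([
    [2, 0, 1, 0],   # "shirt"
    [1, 0, 2, 0],   # "trousers"
    [0, 3, 0, 1],   # "apple"
    [0, 2, 0, 2],   # "knife"
], dtype=float)

U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
obj_vectors = U[:, :k] * s[:k]          # objects in a k-dimensional latent space

def similarity(i, j):
    a, b = obj_vectors[i], obj_vectors[j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity(0, 1))   # same-category pair (clothing): high
print(similarity(0, 2))   # cross-category pair: lower
```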

  7. Atypical Right Hemisphere Specialization for Object Representations in an Adolescent with Specific Language Impairment

    PubMed Central

    Brown, Timothy T.; Erhart, Matthew; Avesar, Daniel; Dale, Anders M.; Halgren, Eric; Evans, Julia L.

    2014-01-01

    Individuals with a diagnosis of specific language impairment (SLI) show abnormal spoken language occurring alongside normal non-verbal abilities. Behaviorally, people with SLI exhibit diverse profiles of impairment involving phonological, grammatical, syntactic, and semantic aspects of language. In this study, we used a multimodal neuroimaging technique called anatomically constrained magnetoencephalography (aMEG) to measure the dynamic functional brain organization of an adolescent with SLI. Using single-subject statistical maps of cortical activity, we compared this patient to a sibling and to a cohort of typically developing subjects during the performance of tasks designed to evoke semantic representations of concrete objects. Localized patterns of brain activity within the language impaired patient showed marked differences from the typical functional organization, with significant engagement of right hemisphere heteromodal cortical regions generally homotopic to the left hemisphere areas that usually show the greatest activity for such tasks. Functional neuroanatomical differences were evident at early sensoriperceptual processing stages and continued through later cognitive stages, observed specifically at latencies typically associated with semantic encoding operations. Our findings show with real-time temporal specificity evidence for an atypical right hemisphere specialization for the representation of concrete entities, independent of verbal motor demands. More broadly, our results demonstrate the feasibility and potential utility of using aMEG to characterize individual patient differences in the dynamic functional organization of the brain. PMID:24592231

  8. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  9. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main geomatics approaches are generally used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, researchers use terrestrial images with close-range photogrammetry, DSMs, and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). After a detailed study of these, we give the conclusions of this research paper, together with a short justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  10. Object Oriented Programming Systems (OOPS) and frame representations: An investigation of programming paradigms

    NASA Technical Reports Server (NTRS)

    Auty, David

    1988-01-01

    The project was initiated to research Object Oriented Programming Systems (OOPS) and frame representation systems, their significance and applicability, and their implementation in or relationship to Ada. "Object oriented" is currently a very popular conceptual adjective. Object oriented programming, in particular, is promoted as a particularly productive approach to programming; an approach which maximizes opportunities for code reuse and lends itself to the definition of convenient and well-developed units. Such units are thus expected to be usable in a variety of situations, beyond the typical highly specific unit development of other approaches. Frame representation systems share a common heritage and similar conceptual foundations. Together they represent a quickly emerging alternative approach to programming. The approach taken was first to define the terms, starting with relevant concepts and using these to put bounds on what is meant by OOPS and frames. From this, the possibilities of merging OOPS with Ada were pursued, which further elucidated the significant characteristics that make up this programming approach. Finally, some of the merits and demerits of OOPS were briefly considered as a way of addressing the applicability of OOPS to various programming tasks.

  11. Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.

    PubMed

    Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S

    2015-01-01

    Additive manufacturing or 3D printing is the process by which three dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics in a layered fashion resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons. PMID:26121063

  12. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking

    PubMed Central

    Lin, Zhicheng; He, Sheng

    2012-01-01

    Object identities (“what”) and their spatial locations (“where”) are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and is continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects (“files”) within the reference frame (“cabinet”) are orderly coded relative to the frame. PMID:23104817

  13. Topographic representations of object size and relationships with numerosity reveal generalized quantity processing in human parietal cortex

    PubMed Central

    Harvey, Ben M.; Fracasso, Alessio; Petridou, Natalia; Dumoulin, Serge O.

    2015-01-01

    Humans and many animals analyze sensory information to estimate quantities that guide behavior and decisions. These quantities include numerosity (object number) and object size. Having recently demonstrated topographic maps of numerosity, we ask whether the brain also contains maps of object size. Using ultra-high-field (7T) functional MRI and population receptive field modeling, we describe tuned responses to visual object size in bilateral human posterior parietal cortex. Tuning follows linear Gaussian functions and shows surround suppression, and tuning width narrows with increasing preferred object size. Object size-tuned responses are organized in bilateral topographic maps, with similar cortical extents responding to large and small objects. These properties of object size tuning and map organization all differ from the numerosity representation, suggesting that object size and numerosity tuning result from distinct mechanisms. However, their maps largely overlap and object size preferences correlate with numerosity preferences, suggesting associated representations of these two quantities. Object size preferences here show no discernable relation to visual position preferences found in visuospatial receptive fields. As such, object size maps (much like numerosity maps) do not reflect sensory organ structure but instead emerge within the brain. We speculate that, as in sensory processing, optimization of cognitive processing using topographic maps may be a common organizing principle in association cortex. Interactions between object size and numerosity maps may associate cognitive representations of these related features, potentially allowing consideration of both quantities together when making decisions. PMID:26483452

  14. Topographic representations of object size and relationships with numerosity reveal generalized quantity processing in human parietal cortex.

    PubMed

    Harvey, Ben M; Fracasso, Alessio; Petridou, Natalia; Dumoulin, Serge O

    2015-11-01

    Humans and many animals analyze sensory information to estimate quantities that guide behavior and decisions. These quantities include numerosity (object number) and object size. Having recently demonstrated topographic maps of numerosity, we ask whether the brain also contains maps of object size. Using ultra-high-field (7T) functional MRI and population receptive field modeling, we describe tuned responses to visual object size in bilateral human posterior parietal cortex. Tuning follows linear Gaussian functions and shows surround suppression, and tuning width narrows with increasing preferred object size. Object size-tuned responses are organized in bilateral topographic maps, with similar cortical extents responding to large and small objects. These properties of object size tuning and map organization all differ from the numerosity representation, suggesting that object size and numerosity tuning result from distinct mechanisms. However, their maps largely overlap and object size preferences correlate with numerosity preferences, suggesting associated representations of these two quantities. Object size preferences here show no discernable relation to visual position preferences found in visuospatial receptive fields. As such, object size maps (much like numerosity maps) do not reflect sensory organ structure but instead emerge within the brain. We speculate that, as in sensory processing, optimization of cognitive processing using topographic maps may be a common organizing principle in association cortex. Interactions between object size and numerosity maps may associate cognitive representations of these related features, potentially allowing consideration of both quantities together when making decisions. PMID:26483452

  15. GammaModeler TM 3-D gamma-ray imaging technology

    SciTech Connect

    2000-09-01

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.

  16. A modular non-negative matrix factorization for parts-based object recognition using subspace representation

    NASA Astrophysics Data System (ADS)

    Bajla, Ivan; Soukup, Daniel

    2008-02-01

    Non-negative matrix factorization (NMF) of an input data matrix into a matrix of basis vectors and a matrix of encoding coefficients is a subspace representation method that has attracted the attention of researchers in pattern recognition in recent years. We explored crucial aspects of NMF in extensive recognition experiments with the ORL database of faces, whose images include intuitively clear parts constituting the whole. By restructuring the learning stage and formulating a separate NMF problem for each a priori given part, we developed a novel modular NMF algorithm. Although this algorithm provides uniquely separated basis vectors that code individual face parts in accordance with the parts-based principle of the NMF methodology applied to object recognition, the significant improvement of recognition rates for occluded parts predicted in several papers was not reached. We claim that using the parts-based concept in NMF as a basis for solving recognition problems with occluded objects has not been justified.
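
    For reference, the sketch below shows a standard NMF with Lee-Seung multiplicative updates under a Euclidean loss. The paper's modular variant solves one such factorisation per a priori given face part; that partitioning, and the ORL experiments, are not reproduced here.

```python
# Minimal sketch of NMF via Lee-Seung multiplicative updates (Euclidean loss).
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factorise non-negative V (pixels x images) into W (pixels x r) @ H (r x images)."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r))
    H = rng.random((r, V.shape[1]))
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```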

  17. Object Representations in the Temporal Cortex of Monkeys and Humans as Revealed by Functional Magnetic Resonance Imaging

    PubMed Central

    Bell, Andrew H.; Hadj-Bouziane, Fadila; Frihauf, Jennifer B.; Tootell, Roger B. H.; Ungerleider, Leslie G.

    2009-01-01

    Increasing evidence suggests that the neural processes associated with identifying everyday stimuli include the classification of those stimuli into a limited number of semantic categories. How the neural representations of these stimuli are organized in the temporal lobe remains under debate. Here we used functional magnetic resonance imaging (fMRI) to identify correlates for three current hypotheses concerning object representations in the inferior temporal (IT) cortex of monkeys and humans: representations based on animacy, semantic categories, or visual features. Subjects were presented with blocked images of faces, body parts (animate stimuli), objects, and places (inanimate stimuli), and multiple overlapping contrasts were used to identify the voxels most selective for each category. Stimulus representations appeared to segregate according to semantic relationships. Discrete regions selective for animate and inanimate stimuli were found in both species. These regions could be further subdivided into regions selective for individual categories. Notably, face-selective regions were contiguous with body-part-selective regions, and object-selective regions were contiguous with place-selective regions. When category-selective regions in monkeys were tested with blocks of single exemplars, individual voxels showed preferences for visually dissimilar exemplars from the same category and voxels with similar preferences tended to cluster together. Our results provide some novel observations with respect to how stimulus representations are organized in IT cortex. In addition, they further support the idea that representations of complex stimuli in IT cortex are organized into multiple hierarchical tiers, encompassing both semantic and physical properties. PMID:19052111

  18. Recognition methods for 3D textured surfaces

    NASA Astrophysics Data System (ADS)

    Cula, Oana G.; Dana, Kristin J.

    2001-06-01

    Texture as a surface representation is the subject of a wide body of computer vision and computer graphics literature. While texture is always associated with a form of repetition in the image, the repeating quantity may vary. The texture may be a color or albedo variation as in a checkerboard, a paisley print or zebra stripes. Very often in real-world scenes, texture is instead due to a surface height variation, e.g. pebbles, gravel, foliage and any rough surface. Such surfaces are referred to here as 3D textured surfaces. Standard texture recognition algorithms are not appropriate for 3D textured surfaces because the appearance of these surfaces changes in a complex manner with viewing direction and illumination direction. Recent methods have been developed for recognition of 3D textured surfaces using a database of surfaces observed under varied imaging parameters. One of these methods is based on 3D textons obtained using K-means clustering of multiscale feature vectors. Another method uses eigen-analysis originally developed for appearance-based object recognition. In this work we develop a hybrid approach that employs both feature grouping and dimensionality reduction. The method is tested using the Columbia-Utrecht texture database and provides excellent recognition rates. The method is compared with existing recognition methods for 3D textured surfaces. A direct comparison is facilitated by empirical recognition rates from the same texture data set. The current method has key advantages over existing methods including requiring less prior information on both the training and novel images.
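    A minimal sketch of the texton idea mentioned above follows: per-pixel multiscale feature vectors are clustered with K-means, and the cluster centres play the role of "3D textons". The filter bank (plain Gaussian blurs), the bare-bones K-means, and the toy image are illustrative assumptions, not the published method's actual feature set.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_features(image, sigmas=(1, 2, 4, 8)):
    """Stack per-pixel responses of a simple multiscale (Gaussian) filter bank."""
    feats = [gaussian_filter(image, s) for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(sigmas))

def kmeans(X, k, n_iter=50, seed=0):
    """Bare-bones K-means; the resulting centres act as texton prototypes."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres, labels

img = np.random.default_rng(2).random((64, 64))    # stand-in for a texture image
textons, labels = kmeans(multiscale_features(img), k=8)
```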

  19. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  20. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  1. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected liver findings. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Large differences were found among the liver volumes estimated by the three different techniques. 3D ultrasound represents a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  2. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Entities in cities and urban areas, such as building structures, are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient for coping with the complexity of new developments in big cities. The emergence of 3D city models has boosted the efficiency of analysing and managing urban areas, as 3D data have been proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on 3D city models. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available in the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models at five well-defined Levels of Detail (LoD), namely LoD0

  3. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  4. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  5. Two Eyes, 3D: Stereoscopic Design Principles

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and from past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads so that spatial manipulation of the device can be recorded and examined for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open-source licenses. More information is at www.twoeyes3d.org.

  6. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  7. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1998-01-01

    We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including the model assessment application and the objective analysis application. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.

  8. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
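    To make the minimization described above concrete, one plausible form of the distortion objective is sketched below. This is a hypothetical reconstruction for illustration only; the exact functional, weights, and constraints used by the authors are not given in the abstract.

```latex
% Hypothetical distortion objective: the forecast F is displaced by a field d(x)
% and corrected by a bias field b(x) so that it best matches the verifying
% analysis A, with penalty terms keeping d and b smooth and small.
J(\mathbf{d}, b) =
  \int \left[ F\!\left(\mathbf{x} + \mathbf{d}(\mathbf{x})\right) + b(\mathbf{x}) - A(\mathbf{x}) \right]^{2} d\mathbf{x}
  \; + \; \lambda_{d}\, \| \nabla \mathbf{d} \|^{2}
  \; + \; \lambda_{b}\, \| b \|^{2}
```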

  9. Neonatal representation of odour objects: distinct memories of the whole and its parts

    PubMed Central

    Coureaud, Gérard; Thomas-Danguin, Thierry; Wilson, Donald A.; Ferreira, Guillaume

    2014-01-01

    Extraction of relevant information from highly complex environments is a prerequisite to survival. Within odour mixtures, such information is contained in the odours of specific elements or in the mixture configuration perceived as a whole unique odour. For instance, an AB mixture of the element A (ethyl isobutyrate) and the element B (ethyl maltol) generates a configural AB percept in humans and apparently in another species, the rabbit. Here, we examined whether the memory of such a configuration is distinct from the memory of the individual odorants. Taking advantage of the newborn rabbit's ability to learn odour mixtures, we combined behavioural and pharmacological tools to specifically eliminate elemental memory of A and B after conditioning to the AB mixture and evaluate consequences on configural memory of AB. The amnesic treatment suppressed responsiveness to A and B but not to AB. Two other experiments confirmed the specific perception and particular memory of the AB mixture. These data demonstrate the existence of configurations in certain odour mixtures and their representation as unique objects: after learning, animals form a configural memory of these mixtures, which coexists with, but is relatively dissociated from, memory of their elements. This capability emerges very early in life. PMID:24990670

  10. Decomposing and Connecting Object Representations in 5- to 9-Year-Old Children's Drawing Behaviour

    ERIC Educational Resources Information Center

    Picard, Delphine; Vinter, Annie

    2006-01-01

    This study aimed at specifying the content of the representational redescription (RR) process assumed by Karmiloff-Smith (1992) with respect to the emergence of inter-representational flexibility in children's drawing behaviour. We hypothesized that the RR process included part-whole decomposition processes that are essential to the ability to…

  11. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  12. 3D geometry applied to atmospheric layers

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Moreels, Guy; Faivre, Michael

    Epipolar geometry is an efficient method for generating 3D representations of objects. Here we present an original application of this method to the case of atmospheric layers. Two synchronized simultaneous images of the same scene are taken at two sites separated by a distance D. The 36° × 36° fields of view are oriented face to face along the same line of sight, but in opposite directions. The elevation angle of the optical axis above the horizon is 17°. The observed objects are airglow emissions, cirrus clouds or aircraft trails. In the case of clouds, the shape of the objects is diffuse. To obtain a superposition of the commonly observed zone, it is necessary to calculate a normalized cross-correlation coefficient (NCC) to identify pairs of matching points in both images. The perspective effect in the rectangular images is inverted to produce a satellite-type view of the atmospheric layer as it could be seen from an overlying satellite. We developed a triangulation algorithm to retrieve the 3D surface of the observed layer. The stereoscopic method was used to retrieve the wavy structure of the OH emissive layer at an altitude of 87 km. The distance between the observing sites was 600 km. Results obtained in Peru from the sites of Cerro Cosmos and Cerro Verde will be presented. We are currently extending the stereoscopic procedure to the study of tropospheric cirrus clouds, of natural origin or induced by aircraft engines. In this case, the distance between observation sites is D ≈ 60 km.
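    A minimal sketch of the normalized cross-correlation matching step mentioned above is shown below. The window size, the exhaustive candidate search, and all names are illustrative assumptions rather than the authors' actual processing chain.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation coefficient between two same-size patches;
    values close to 1 indicate a likely match between the two views."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(patch, image, size):
    """Slide `patch` over `image` and return the top-left corner with the highest NCC."""
    best, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - size + 1):
        for x in range(image.shape[1] - size + 1):
            score = ncc(patch, image[y:y + size, x:x + size])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```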

  13. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
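    As a rough illustration of the implicit (backward-Euler) transient solution strategy mentioned above, the sketch below advances a 1-D heat-conduction problem by one implicit step. It is a simple finite-difference stand-in, not TACO3D itself, and the grid, material value, and boundary temperatures are illustrative assumptions.

```python
import numpy as np

def implicit_heat_step(T, dt, dx, alpha, T_left, T_right):
    """One backward-Euler step of dT/dt = alpha * d2T/dx2 with fixed (Dirichlet)
    end temperatures; builds and solves the resulting linear system."""
    n = len(T)
    r = alpha * dt / dx ** 2
    A = np.zeros((n, n))
    b = T.copy()
    for i in range(n):
        if i in (0, n - 1):              # boundary nodes held at fixed values
            A[i, i] = 1.0
            b[i] = T_left if i == 0 else T_right
        else:
            A[i, i - 1] = -r
            A[i, i] = 1.0 + 2.0 * r
            A[i, i + 1] = -r
    return np.linalg.solve(A, b)

T = np.full(11, 20.0)                    # initial temperature field (degrees C)
for _ in range(100):
    T = implicit_heat_step(T, dt=0.1, dx=0.01, alpha=1e-4, T_left=100.0, T_right=20.0)
```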

  14. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  16. Topographic representation of an occluded object and the effects of spatiotemporal context in human early visual areas.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki; Hanakawa, Takashi; Urayama, Shin-Ichi; Aso, Toshihiko; Fukuyama, Hidenao; Ejima, Yoshimichi

    2013-10-23

    Occlusion is a primary challenge facing the visual system in perceiving object shapes in intricate natural scenes. Although behavioral, neurophysiological, and modeling studies have shown that occluded portions of objects may be completed at an early stage of visual processing, we have little knowledge of how and where in the human brain the completion is realized. Here, we provide functional magnetic resonance imaging (fMRI) evidence that the occluded portion of an object is indeed represented topographically in human V1 and V2. Specifically, we find topographic cortical responses corresponding to invisible object rotation in V1 and V2. Furthermore, by investigating neural responses for the occluded target rotation within precisely defined cortical subregions, we could dissociate the topographic neural representation of the occluded portion from other types of neural processing such as object edge processing. We further demonstrate that the early topographic representation in V1 can be modulated by prior knowledge of the whole appearance of an object obtained before partial occlusion. These findings suggest that primary "visual" area V1 has the ability to process not only visible or virtually (illusorily) perceived objects but also "invisible" portions of objects without concurrent visual sensation such as luminance enhancement of these portions. The results also suggest that low-level image features and higher preceding cognitive context are integrated into a unified topographic representation of the occluded portion in early areas. PMID:24155304

  17. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  18. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  19. Lateralized Effects of Categorical and Coordinate Spatial Processing of Component Parts on the Recognition of 3D Non-Nameable Objects

    ERIC Educational Resources Information Center

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-01-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…

  20. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  1. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  2. 3-D volumetric computed tomographic scoring as an objective outcome measure for chronic rhinosinusitis: Clinical correlations and comparison to Lund-Mackay scoring

    PubMed Central

    Pallanch, John; Yu, Lifeng; Delone, David; Robb, Rich; Holmes, David R.; Camp, Jon; Edwards, Phil; McCollough, Cynthia H.; Ponikau, Jens; Dearking, Amy; Lane, John; Primak, Andrew; Shinkle, Aaron; Hagan, John; Frigas, Evangelo; Ocel, Joseph J.; Tombers, Nicole; Siwani, Rizwan; Orme, Nicholas; Reed, Kurtis; Jerath, Nivedita; Dhillon, Robinder; Kita, Hirohito

    2014-01-01

    Background We aimed to test the hypothesis that 3-D volume-based scoring of computed tomographic (CT) images of the paranasal sinuses was superior to Lund-Mackay CT scoring of disease severity in chronic rhinosinusitis (CRS). We determined correlation between changes in CT scores (using each scoring system) with changes in other measures of disease severity (symptoms, endoscopic scoring, and quality of life) in patients with CRS treated with triamcinolone. Methods The study group comprised 48 adult subjects with CRS. Baseline symptoms and quality of life were assessed. Endoscopy and CT scans were performed. Patients received a single systemic dose of intramuscular triamcinolone and were reevaluated 1 month later. Strengths of the correlations between changes in CT scores and changes in CRS signs and symptoms and quality of life were determined. Results We observed some variability in degree of improvement for the different symptom, endoscopic, and quality-of-life parameters after treatment. Improvement of parameters was significantly correlated with improvement in CT disease score using both CT scoring methods. However, volumetric CT scoring had greater correlation with these parameters than Lund-Mackay scoring. Conclusion Volumetric scoring exhibited higher degree of correlation than Lund-Mackay scoring when comparing improvement in CT score with improvement in score for symptoms, endoscopic exam, and quality of life in this group of patients who received beneficial medical treatment for CRS. PMID:24106202

  3. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to obtain anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convolved with the point spread function of the functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need to warp individual data to an anatomical or statistical MRI brain atlas.
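    The region-growing primitive mentioned above can be sketched as follows: starting from a seed voxel, collect all connected voxels that carry the same class label. The 2-D, 4-connected version below and all names are illustrative simplifications, not the publication's algorithm.

```python
import numpy as np
from collections import deque

def region_grow(label_image, seed, target_label):
    """Return a boolean mask of the 4-connected component of `target_label`
    voxels reachable from `seed` in a 2-D class-label image."""
    mask = np.zeros(label_image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or label_image[y, x] != target_label:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < label_image.shape[0] and 0 <= nx < label_image.shape[1]:
                queue.append((ny, nx))
    return mask

labels = np.zeros((8, 8), dtype=int)
labels[2:5, 2:6] = 3                      # toy "anatomical object" with class label 3
grown = region_grow(labels, seed=(3, 3), target_label=3)
```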

  4. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  5. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  6. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object's departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" Space Suit jettisons, as well as the near-real-time (provided within several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and the location of the jettisoned object was then calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
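    For readers unfamiliar with close-range photogrammetry, the sketch below shows a generic linear (DLT) triangulation of a single 3-D point from its pixel coordinates in two calibrated views. It is an illustrative stand-in for the per-frame position computation described above, not the actual ISS processing pipeline; the projection matrices and variable names are assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its pixel coordinates
    x1, x2 in two views with 3x4 camera projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                   # homogeneous -> Euclidean coordinates
```

    A velocity estimate then follows by differencing the triangulated positions of successive synchronized frames and dividing by the frame interval.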

  7. The distributed representation of random and meaningful object pairs in human occipitotemporal cortex: the weighted average as a general rule.

    PubMed

    Baeck, Annelies; Wagemans, Johan; Op de Beeck, Hans P

    2013-04-15

    Natural scenes typically contain multiple visual objects, often in interaction, such as when a bottle is used to fill a glass. Previous studies disagree about the representation of multiple objects and the role of object position therein, and they did not pinpoint the effect of potential interactions between the objects. In an fMRI study, we presented four single objects in two different positions and object pairs consisting of all possible combinations of the single objects. Object pairs