Science.gov

Sample records for 3d object representation

  1. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to the study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  2. A computational model that recovers the 3D shape of an object from a single 2D retinal representation.

    PubMed

    Li, Yunfeng; Pizlo, Zygmunt; Steinman, Robert M

    2009-05-01

    Human beings perceive 3D shapes veridically, but the underlying mechanisms remain unknown. The problem of producing veridical shape percepts is computationally difficult because the 3D shapes have to be recovered from 2D retinal images. This paper describes a new model, based on a regularization approach, that does this very well. It uses a new simplicity principle composed of four shape constraints: viz., symmetry, planarity, maximum compactness and minimum surface. Maximum compactness and minimum surface have never been used before. The model was tested with random symmetrical polyhedra. It recovered their 3D shapes from a single randomly-chosen 2D image. Neither learning, nor depth perception, was required. The effectiveness of the maximum compactness and the minimum surface constraints were measured by how well the aspect ratio of the 3D shapes was recovered. These constraints were effective; they recovered the aspect ratio of the 3D shapes very well. Aspect ratios recovered by the model were compared to aspect ratios adjusted by four human observers. They also adjusted aspect ratios very well. In those rare cases, in which the human observers showed large errors in adjusted aspect ratios, their errors were very similar to the errors made by the model. PMID:18621410

  3. Formal representation of 3D structural geological models

    NASA Astrophysics Data System (ADS)

    Wang, Zhangang; Qu, Honggang; Wu, Zixing; Yang, Hongjun; Du, Qunle

    2016-05-01

    The development and widespread application of geological modeling methods has increased demands for the integration and sharing services of three dimensional (3D) geological data. However, theoretical research in the field of geological information sciences is limited despite the widespread use of Geographic Information Systems (GIS) in geology. In particular, fundamental research on the formal representations and standardized spatial descriptions of 3D structural models is required. This is necessary for accurate understanding and further applications of geological data in 3D space. In this paper, we propose a formal representation method for 3D structural models using the theory of point set topology, which produces a mathematical definition for the major types of geological objects. The spatial relationships between geologic boundaries, structures, and units are explained in detail using the 9-intersection model. Reasonable conditions for describing the topological space of 3D structural models are also provided. The results from this study can be used as potential support for the standardized representation and spatial quality evaluation of 3D structural models, as well as for specific needs related to model-based management, query, and analysis.

  4. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from:(i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate cameras viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest on joint object recognition and 3D reconstruction from a single image. PMID:27295458

  5. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addressed the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  6. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  7. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  8. Recognition of 3-D Scene with Partially Occluded Objects

    NASA Astrophysics Data System (ADS)

    Lu, Siwei; Wong, Andrew K. C...

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.

  9. 3D-dynamic representation of DNA sequences.

    PubMed

    Wąż, Piotr; Bielińska-Wąż, Dorota

    2014-03-01

    A new 3D graphical representation of DNA sequences is introduced. This representation is called 3D-dynamic representation. It is a generalization of the 2D-dynamic dynamic representation. The sequences are represented by sets of "material points" in the 3D space. The resulting 3D-dynamic graphs are treated as rigid bodies. The descriptors characterizing the graphs are analogous to the ones used in the classical dynamics. The classification diagrams derived from this representation are presented and discussed. Due to the third dimension, "the history of the graph" can be recognized graphically because the 3D-dynamic graph does not overlap with itself. Specific parts of the graphs correspond to specific parts of the sequence. This feature is essential for graphical comparisons of the sequences. Numerically, both 2D and 3D approaches are of high quality. In particular, a difference in a single base between two sequences can be identified and correctly described (one can identify which base) by both 2D and 3D methods. PMID:24567158

  10. Developing Spatial Reasoning Through 3D Representations of the Universe

    NASA Astrophysics Data System (ADS)

    Summers, F.; Eisenhamer, B.; McCallister, D.

    2013-12-01

    Mental models of astronomical objects are often greatly hampered by the flat two-dimensional representation of pictures from telescopes. Lacking experience with the true structures in much of the imagery, there is no basis for anything but the default interpretation of a picture postcard. Using astronomical data and scientific visualizations, our professional development session allows teachers and their students to develop their spatial reasoning while forming more accurate and richer mental models. Examples employed in this session include star positions and constellations, morphologies of both normal and interacting galaxies, shapes of planetary nebulae, and three dimensional structures in star forming regions. Participants examine, imagine, predict, and confront the 3D interpretation of well-known 2D imagery using authentic data from NASA, the Hubble Space Telescope, and other scientific sources. The session's cross-disciplinary nature includes science, math, and artistic reasoning while addressing common cosmic misconceptions. Stars of the Orion Constellation seen in 3D explodes the popular misconception that stars in a constellation are all at the same distance. A scientific visualization of two galaxies colliding provides a 3D comparison for Hubble images of interacting galaxies.

  11. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combined the 3d data model with object-oriented organization method, put forward the model of 3d data based on object-oriented method, implemented the city 3d model to quickly build logical semantic expression and model, solved the city 3d spatial information representation problem of the same location with multiple property and the same property with multiple locations, designed the space object structure of point, line, polygon, body for city of 3d spatial database, and provided a new thought and method for the city 3d GIS model and organization management.

  12. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way for automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interests in research field due to its richness of feature and ease of acquisition. However, the existing ICP (Iterative Closet Point)-based 3D ear matching methods prevalent in the literature are not quite efficient to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel effective fully automatic 3D ear identification system. We at first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which facilitates much the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly online available at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.

  13. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case

  14. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way for automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interests in research field due to its richness of feature and ease of acquisition. However, the existing ICP (Iterative Closet Point)-based 3D ear matching methods prevalent in the literature are not quite efficient to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel effective fully automatic 3D ear identification system. We at first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which facilitates much the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly online available at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  15. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way for automatically recognizing, with a high confidence, a person’s identity. Recently, 3D ear shape has attracted tremendous interests in research field due to its richness of feature and ease of acquisition. However, the existing ICP (Iterative Closet Point)-based 3D ear matching methods prevalent in the literature are not quite efficient to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel effective fully automatic 3D ear identification system. We at first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which facilitates much the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly online available at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  16. Computational integral-imaging reconstruction-based 3-D volumetric target object recognition by using a 3-D reference object.

    PubMed

    Kim, Seung-Cheol; Park, Seok-Chan; Kim, Eun-Soo

    2009-12-01

    In this paper, we propose a novel computational integral-imaging reconstruction (CIIR)-based three-dimensional (3-D) image correlator system for the recognition of 3-D volumetric objects by employing a 3-D reference object. That is, a number of plane object images (POIs) computationally reconstructed from the 3-D reference object are used for the 3-D volumetric target recognition. In other words, simultaneous 3-D image correlations between two sets of target and reference POIs, which are depth-dependently reconstructed by using the CIIR method, are performed for effective recognition of 3-D volumetric objects in the proposed system. Successful experiments with this CIIR-based 3-D image correlator confirmed the feasibility of the proposed method.

  17. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those, were used to illustrate the subsurface geology, whereas now, we can create complex digital 3D models. These models are produced with special software, such as GOCAD ®. The models can be viewed, only through the software used to create them, or through viewers available for free. The platform-independent PDF (Portable Document Format), enforced by Adobe, has found a wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles), and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D), could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures, and to represent colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions and horizontal and vertical scales help to facilitate the use.

  18. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object based video is relevant for 3D Immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state of art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D Human representation compared to animated computer avatars.

  19. Incremental learning of 3D-DCT compact representations for robust visual tracking.

    PubMed

    Li, Xi; Dick, Anthony; Shen, Chunhua; van den Hengel, Anton; Wang, Hanzi

    2013-04-01

    Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.

  20. Acquiring 3-D Spatial Data Of A Real Object

    NASA Astrophysics Data System (ADS)

    Wu, C. K.; Wang, D. Q.; Bajcsy, R. K...

    1983-10-01

    A method of acquiring spatial data of a real object via a stereometric system is presented. Three-dimensional (3-D) data of an object are acquired by: (1) camera calibration; (2) stereo matching; (3) multiple stereo views covering the whole object; (4) geometrical computations to determine the 3-D coordinates for each sample point of the object. The analysis and the experimental results indicate the method implemented is capable of measuring the spatial data of a real object with satisfactory accuracy.

  1. Design of 3d Topological Data Structure for 3d Cadastre Objects

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. A.; Rahman, A. Abdul; Hassan, M. I.

    2016-09-01

    This paper describes the design of 3D modelling and topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. Tetrahedral Network (TEN) is selected as a 3D topological data structure for this project. Data modelling is based on the LADM standard and it is used five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating 3D topology model based on LADM standard.

  2. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D model represented by Boundary Representation model in R3. We proposed a new dimension extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in 3*3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point cloud automatically.

  3. 3D model-based still image object categorization

    NASA Astrophysics Data System (ADS)

    Petre, Raluca-Diana; Zaharia, Titus

    2011-09-01

    This paper proposes a novel recognition scheme algorithm for semantic labeling of 2D object present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to infer the semantics of the 3D object to the image. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases in order to label unknown images randomly selected from the web. Results obtained show promising performances, with recognition rate up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images/videos.

  4. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  5. Geoinformation techniques for the 3D visualisation of historic buildings and representation of a building's pathology

    NASA Astrophysics Data System (ADS)

    Tsilimantou, Elisavet; Delegou, Ekaterini; Ioannidis, Charalabos; Moropoulou, Antonia

    2016-08-01

    In this paper, the documentation of an historic building registered as Cultural Heritage asset is presented. The aim of the survey is to create a 3D geometric representation of a historic building and in accordance with multidisciplinary study extract useful information regarding the extent of degradation, constructions' durability etc. For the implementation of the survey, a combination of different types of acquisition technologies is used. The project focuses on the study of Villa Klonaridi, in Athens, Greece. For the complete documentation of the building, conventional topography, photogrammetric and laser scanning techniques is combined. Close range photogrammetric techniques are used for the acquisition of the façades and architectural details. One of the main objectives is the development of an accurate 3D model, where the photorealistic representation of the building is achieved, along with the decay pathology, historical phases and architectural components. In order to achieve a suitable graphical representation for the study of the material and decay patterns beyond the 2D representation, 3D modelling and additional information modelling is performed for comparative analysis. The study provides various conclusions regarding the scale of deterioration obtained by the 2D and 3D analysis respectively. Considering the variation in material and decay patterns, comparative results are obtained regarding the degradation of the building. Overall, the paper describes a process performed on a Historic Building, where the 3D digital acquisition of the monuments' structure is realized with the combination of close range surveying and laser scanning methods.

  6. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

    Three-dimensional (3D) modelling of plants can be an asset for creating agricultural based visualisation products. The continuum of 3D plants models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. This approach of 3D plant object visualisation is primarily evident from the visualisation of plants using photographed billboarded images, to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  7. 3D object hiding using three-dimensional ptychography

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Wang, Zhibo; Li, Tuo; Pan, An; Wang, Yali; Shi, Yishi

    2016-09-01

    We present a novel technique for 3D object hiding by applying three-dimensional ptychography. Compared with 3D information hiding based on holography, the proposed ptychography-based hiding technique is easier to implement, because the reference beam and high-precision interferometric optical setup are not required. The acquisition of the 3D object and the ptychographic encoding process are performed optically. Owing to the introduction of probe keys, the security of the ptychography-based hiding system is significantly enhanced. A series of experiments and simulations demonstrate the feasibility and imperceptibility of the proposed method.

  8. Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Nguyen, Thanh-Hung; Lin, Shyh-Tsong

    2015-10-01

    This work proposes a novel approach to segmenting randomly stacked objects in unstructured 3D point clouds, which are acquired by a random-speckle 3D imaging system for the purpose of automated object detection and reconstruction. An innovative algorithm is proposed; it is based on a novel concept of 3D watershed segmentation and the strategies for resolving over-segmentation and under-segmentation problems. Acquired 3D point clouds are first transformed into a corresponding orthogonally projected depth map along the optical imaging axis of the 3D sensor. A 3D watershed algorithm based on the process of distance transformation is then performed to detect the boundary, called the edge dam, between stacked objects and thereby to segment point clouds individually belonging to two stacked objects. Most importantly, an object-matching algorithm is developed to solve the over- and under-segmentation problems that may arise during the watershed segmentation. The feasibility and effectiveness of the method are confirmed experimentally. The results reveal that the proposed method is a fast and effective scheme for the detection and reconstruction of a 3D object in a random stack of such objects. In the experiments, the precision of the segmentation exceeds 95% and the recall exceeds 80%.

  9. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for threedimensional medical imaging, topography, surveillance, robotic vision because of ability to detect and recognize objects. In this paper, we present a 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data limited in number and with low representativeness. The 2D laser data used in this paper come from simulations that are based on the calculation of the laser interactions with the different interfaces of the scene of interest and from experimental results. We show the global 3D reconstruction procedures capable to separate objects from foliage and reconstruct a threedimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images and we analyse the different parameters of the identification process such as resolution, the scenario of camouflage, noise impact and lacunarity degree.

  10. 3D dimeron as a stable topological object

    NASA Astrophysics Data System (ADS)

    Yang, Shijie; Liu, Yongkai

    2015-03-01

    Searching for novel topological objects is always an intriguing task for scientists in various fields. We study a new three-dimensional (3D) topological structure called 3D dimeron in the trapped two-component Bose-Einstein condensates. The 3D dimeron differs to the conventional 3D skyrmion for the condensates hosting two interlocked vortex-rings. We demonstrate that the vortex-rings are connected by a singular string and the complexity constitutes a vortex-molecule. The stability is investigated through numerically evolving the Gross-Pitaevskii equations, giving a coherent Rabi coupling between the two components. Alternatively, we find that the stable 3D dimeron can be naturally generated from a vortex-free Gaussian wave packet via incorporating a synthetic non-Abelian gauge potential into the condensates. This work is supported by the NSF of China under Grant No. 11374036 and the National 973 program under Grant No. 2012CB821403.

  11. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  12. Embedding objects during 3D printing to add new functionalities.

    PubMed

    Yuen, Po Ki

    2016-07-01

    A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication

  13. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    NASA Astrophysics Data System (ADS)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections

  14. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration.

    PubMed

    Meijer, Frank; van den Broek, Egon L

    2010-03-17

    We investigated individual differences in interactively exploring 3D virtual objects. 36 participants explored 24 simple and 24 difficult objects (composed of respectively three and five Biederman geons) actively, passively, or not at all. Both their 3D mental representation of the objects and visuo-spatial ability was assessed. Results show that, regardless of the object's complexity, people with a low VSA benefit from active exploration of objects, where people with a middle or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance to take individual differences into account. PMID:20116394

  15. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration.

    PubMed

    Meijer, Frank; van den Broek, Egon L

    2010-03-17

    We investigated individual differences in interactively exploring 3D virtual objects. 36 participants explored 24 simple and 24 difficult objects (composed of respectively three and five Biederman geons) actively, passively, or not at all. Both their 3D mental representation of the objects and visuo-spatial ability was assessed. Results show that, regardless of the object's complexity, people with a low VSA benefit from active exploration of objects, where people with a middle or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance to take individual differences into account.

  16. Sketch-driven mental 3D object retrieval

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Sahbi, Hichem

    2010-02-01

    3D object recognition and retrieval recently gained a big interest because of the limitation of the "2D-to-2D" approaches. The latter suffer from several drawbacks such as the lack of information (due for instance to occlusion), pose sensitivity, illumination changes, etc. Our main motivation is to gather both discrimination and easy interaction by allowing simple (but multiple) 2D specifications of queries and their retrieval into 3D gallery sets. We introduce a novel "2D sketch-to-3D model" retrieval framework with the following contributions: (i) first a novel generative approach for aligning and normalizing the pose of 3D gallery objects and extracting their 2D canonical views is introduced. (ii) Afterwards, robust and compact contour signatures are extracted using the set of 2D canonical views. We also introduce a pruning approach to speedup the whole search process in a coarseto- fine way. (iii) Finally, object ranking is performed using our variant of elastic dynamic programming which considers only a subset of possible matches thereby providing a considerable gain in performance for the same amount of errors. Our experiments are reported/compared through the Princeton Shape Benchmark; clearly showing the good performance of our framework w.r.t. the other approaches. An iPhone demo of this method is available and allows us to achieve "2D sketch to 3D object" querying and interaction.

  17. A 3-D measurement system using object-oriented FORTH

    SciTech Connect

    Butterfield, K.B.

    1989-01-01

    Discussed is a system for storing 3-D measurements of points that relates the coordinate system of the measurement device to the global coordinate system. The program described here used object-oriented FORTH to store the measured points as sons of the measuring device location. Conversion of local coordinates to absolute coordinates is performed by passing messages to the point objects. Modifications to the object-oriented FORTH system are also described. 1 ref.

  18. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  19. The Fermion Representation of Quantum Toroidal Algebra on 3D Young Diagrams

    NASA Astrophysics Data System (ADS)

    Cai, Li-Qiang; Wang, Li-Fang; Wu, Ke; Yang, Jie

    2014-07-01

    We develop an equivalence between the diagonal slices and the perpendicular slices of 3D Young diagrams via Maya diagrams. Furthermore, we construct the fermion representation of quantum toroidal algebra on the 3D Young diagrams perpendicularly sliced.

  20. Objective breast symmetry evaluation using 3-D surface imaging.

    PubMed

    Eder, Maximilian; Waldenfels, Fee V; Swobodnik, Alexandra; Klöppel, Markus; Pape, Ann-Kathrin; Schuster, Tibor; Raith, Stefan; Kitzler, Elena; Papadopulos, Nikolaos A; Machens, Hans-Günther; Kovacs, Laszlo

    2012-04-01

    This study develops an objective breast symmetry evaluation using 3-D surface imaging (Konica-Minolta V910(®) scanner) by superimposing the mirrored left breast over the right and objectively determining the mean 3-D contour difference between the 2 breast surfaces. 3 observers analyzed the evaluation protocol precision using 2 dummy models (n = 60), 10 test subjects (n = 300), clinically tested it on 30 patients (n = 900) and compared it to established 2-D measurements on 23 breast reconstructive patients using the BCCT.core software (n = 690). Mean 3-D evaluation precision, expressed as the coefficient of variation (VC), was 3.54 ± 0.18 for all human subjects without significant intra- and inter-observer differences (p > 0.05). The 3-D breast symmetry evaluation is observer independent, significantly more precise (p < 0.001) than the BCCT.core software (VC = 6.92 ± 0.88) and may play a part in an objective surgical outcome analysis after incorporation into clinical practice.

  1. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood

  2. Consistent representations of and conversions between 3D rotations

    NASA Astrophysics Data System (ADS)

    Rowenhorst, D.; Rollett, A. D.; Rohrer, G. S.; Groeber, M.; Jackson, M.; Konijnenberg, P. J.; De Graef, M.

    2015-12-01

    In materials science the orientation of a crystal lattice is described by means of a rotation relative to an external reference frame. A number of rotation representations are in use, including Euler angles, rotation matrices, unit quaternions, Rodrigues-Frank vectors and homochoric vectors. Each representation has distinct advantages and disadvantages with respect to the ease of use for calculations and data visualization. It is therefore convenient to be able to easily convert from one representation to another. However, historically, each representation has been implemented using a set of often tacit conventions; separate research groups would implement different sets of conventions, thereby making the comparison of methods and results difficult and confusing. This tutorial article aims to resolve these ambiguities and provide a consistent set of conventions and conversions between common rotational representations, complete with worked examples and a discussion of the trade-offs necessary to resolve all ambiguities. Additionally, an open source Fortran-90 library of conversion routines for the different representations is made available to the community.

  3. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  4. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-09-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
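
    A minimal sketch of how such a combined regularity score might be evaluated is given below; the exact localized definitions of L-MSDA and L-MSDSM are not reproduced here, so the normalization is an assumption, while the 90%/10% weighting follows the abstract.

    import numpy as np

    def combined_regularity(angles_deg, segment_lengths, w_msda=0.9, w_msdsm=0.1):
        """Combined cost in the spirit of L-MSDA + L-MSDSM (lower is better).
        angles_deg: face/edge angles of a candidate local reconstruction;
        segment_lengths: magnitudes of the corresponding segments."""
        msda  = np.std(angles_deg)      / (np.mean(angles_deg) + 1e-9)
        msdsm = np.std(segment_lengths) / (np.mean(segment_lengths) + 1e-9)
        return w_msda * msda + w_msdsm * msdsm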

  5. Divided attention limits perception of 3-D object shapes.

    PubMed

    Scharff, Alec; Palmer, John; Moore, Cathleen M

    2013-01-01

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes.

  6. Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron

    2013-01-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of

  7. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of

  8. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
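
    A simplified sketch of the phase-matching idea, restricted to translations only (the method above also searches over model orientation and uses projections of a 3D solid model), could look like this in Python; encoding phase angles as unit complex numbers and correlating via the FFT are assumptions consistent with, but not copied from, the description above.

    import numpy as np

    def phase_match_surface(image, template_normals, template_mask):
        """Correlate image gradient phase with the phase of model edge normals
        over all translations using the FFT.
        template_normals: angles (radians) of normals to projected model edges;
        template_mask: 1 on projected edge pixels, 0 elsewhere (same shape as image)."""
        gy, gx = np.gradient(image.astype(float))
        img_phase  = np.exp(1j * np.arctan2(gy, gx))          # unit phasors per pixel
        tmpl_phase = template_mask * np.exp(1j * template_normals)
        # Cross-correlation via FFT; the real part rewards aligned phase angles.
        corr = np.fft.ifft2(np.fft.fft2(img_phase) * np.conj(np.fft.fft2(tmpl_phase)))
        return corr.real / max(template_mask.sum(), 1)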

  9. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  10. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies. PMID:20395086

  11. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies.

  12. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ a chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
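
    The range recovered from the IF (beat) frequency follows the standard FMCW chirp-radar relation R = c * f_IF * T / (2 * B); a tiny Python helper illustrating it is shown below, with parameter values chosen only as an example rather than taken from the system described above.

    def range_from_if_frequency(f_if_hz, chirp_duration_s, chirp_bandwidth_hz):
        """Map a measured IF (beat) frequency to target range for a linear chirp."""
        c = 3.0e8  # speed of light, m/s
        return c * f_if_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

    # Example (assumed numbers): a 10 GHz chirp lasting 1 ms and a 670 kHz beat
    # frequency correspond to roughly 10 m range:
    # range_from_if_frequency(670e3, 1e-3, 10e9) -> about 10.05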

  13. Additive manufacturing. Continuous liquid interface production of 3D objects.

    PubMed

    Tumbleston, John R; Shirvanyants, David; Ermoshkin, Nikita; Janusziewicz, Rima; Johnson, Ashley R; Kelly, David; Chen, Kai; Pinschmidt, Robert; Rolland, Jason P; Ermoshkin, Alexander; Samulski, Edward T; DeSimone, Joseph M

    2015-03-20

    Additive manufacturing processes such as 3D printing use time-consuming, stepwise layer-by-layer approaches to object fabrication. We demonstrate the continuous generation of monolithic polymeric parts up to tens of centimeters in size with feature resolution below 100 micrometers. Continuous liquid interface production is achieved with an oxygen-permeable window below the ultraviolet image projection plane, which creates a "dead zone" (persistent liquid interface) where photopolymerization is inhibited between the window and the polymerizing part. We delineate critical control parameters and show that complex solid parts can be drawn out of the resin at rates of hundreds of millimeters per hour. These print speeds allow parts to be produced in minutes instead of hours.

  14. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representation can facilitate processes of map reading and navigation in digital environments using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient, and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful to improve the usability of pedestrian navigation maps in future designs.

  15. Learning the 3-D structure of objects from 2-D views depends on shape, not format

    PubMed Central

    Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit

    2016-01-01

    Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196

  16. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from its arbitrary pair of stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed as the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. NURBS-skeleton is used to extract the skeleton of both views. Affine invariant property of the convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
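
    A minimal voxel-based sketch of the growing step described above (filling spheres centred on skeleton points, each with radius equal to its distance to the object boundary) is given below; the voxel-grid representation and array layout are assumptions made for illustration only.

    import numpy as np

    def fill_spheres(skeleton_points, radii, grid_shape, voxel_size=1.0):
        """Reconstruct a voxel volume as the union of spheres centred on skeleton
        points (given in voxel coordinates as (x, y, z)), with per-point radii."""
        zz, yy, xx = np.indices(grid_shape)
        volume = np.zeros(grid_shape, dtype=bool)
        for (px, py, pz), r in zip(skeleton_points, radii):
            dist2 = ((xx - px)**2 + (yy - py)**2 + (zz - pz)**2) * voxel_size**2
            volume |= dist2 <= r**2           # mark all voxels inside this sphere
        return volume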

  17. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine
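
    A small sketch of the kind of coordinate-frame conversion such a hierarchy relies on (e.g., mapping a head-centred point into a body-centred frame with a homogeneous transform) is shown below; the transform itself would come from the machine's kinematics and is assumed here, not taken from the paper.

    import numpy as np

    def make_transform(R, t):
        """Build a 4x4 homogeneous transform from rotation matrix R and translation t."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    def head_to_body(point_head, T_body_from_head):
        """Convert a 3D point from a head-centred to a body-centred frame."""
        p = np.append(point_head, 1.0)
        return (T_body_from_head @ p)[:3]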

  18. 3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana

    2005-01-01

    Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…

  19. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860
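
    The exact derivative-based algorithm is not reproduced here; as a generic analogue only, a Laplacian-of-Gaussian response followed by thresholding and labelling gives the flavour of segmenting blob-like nuclei from a dense 3D volume:

    import numpy as np
    from scipy import ndimage as ndi

    def segment_nuclei(volume, sigma=2.0, threshold=None):
        """Generic derivative-based segmentation sketch (not the authors' method):
        a negated Laplacian-of-Gaussian highlights bright blob-like nuclei, which
        are then thresholded and labelled as separate objects."""
        log = -ndi.gaussian_laplace(volume.astype(float), sigma=sigma)
        if threshold is None:
            threshold = log.mean() + 2.0 * log.std()   # assumed heuristic
        mask = log > threshold
        labels, n = ndi.label(mask)
        # selective post-processing (e.g. removing tiny objects) would follow here
        return labels, n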

  20. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  1. Telecentric scanner for 3D profilometry of very large objects

    NASA Astrophysics Data System (ADS)

    Thibault, Simon; Borra, Ermanno F.; Szapiel, Stan

    1997-09-01

    Triangulation systems that are based on an autosynchronized scanning principle to provide accurate and fast acquisition of 3D shapes are able to scan large fields. This is generally done by a coordinate measuring machine (CMM) carrying a small-volume 3D camera. However, the acquisition speed is limited by the CMM movement and also by the image fusion time required to get the complete 3D shape. This paper describes some practical considerations for large volume 3D inspections, with emphasis on telecentric scanning. We present the analytical and optical design of a large telecentric scanner using a large reflective surface. Some results of the laboratory prototype will be presented. We also discuss applications and the viability of this new approach.

  2. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image. PMID:21979427

  3. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
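
    As a hedged illustration of the area computation mentioned above, assuming the visual field representation is a grid of sensitivities with a known solid angle per cell (both assumptions for illustration, not details of the patent):

    import numpy as np

    def defect_area(sensitivity, normal_threshold, cell_area_deg2=1.0):
        """Approximate defect area: count test locations whose sensitivity falls
        below an assumed normal threshold and scale by the area per grid cell."""
        defect_cells = np.asarray(sensitivity) < normal_threshold
        return defect_cells.sum() * cell_area_deg2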

  4. Two Eyes, 3D Early Results: Stereoscopic vs 2D Representations of Highly Spatial Scientific Imagery

    NASA Astrophysics Data System (ADS)

    Price, Aaron

    2013-06-01

    "Two Eyes, 3D" is a 3-year NSF funded research project to study the educational impacts of using stereoscopic representations in informal settings. The first study conducted as part of the project tested children 5-12 on their ability to perceive spatial elements of slides of scientific objects shown to them in either stereoscopic or 2D format. Children were also tested for prior spatial ability. Early results suggest that stereoscopy does not have a major impact on perceiving spatial elements of an image, but it does have a more significant impact on how the children apply that knowledge when presented with a common sense situation. The project is run by the AAVSO and this study was conducted at the Boston Museum of Science.

  5. True-3D accentuating of grids and streets in urban topographic maps enhances human object location memory.

    PubMed

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2015-01-01

    Cognitive representations of learned map information are subject to systematic distortion errors. Map elements that divide a map surface into regions, such as content-related linear symbols (e.g. streets, rivers, railway systems) or additional artificial layers (coordinate grids), provide an orientation pattern that can help users to reduce distortions in their mental representations. In recent years, the television industry has started to establish True-3D (autostereoscopic) displays as mass media. These modern displays make it possible to watch dynamic and static images including depth illusions without additional devices, such as 3D glasses. In these images, visual details can be distributed over different positions along the depth axis. Some empirical studies of vision research provided first evidence that 3D stereoscopic content attracts higher attention and is processed faster. So far, the impact of True-3D accentuating has not yet been explored concerning spatial memory tasks and cartography. This paper reports the results of two empirical studies that investigate whether True-3D accentuating of artificial, regular overlaying line features (i.e. grids) and content-related, irregular line features (i.e. highways and main streets) in official urban topographic maps (scale 1:10,000) further improves human object location memory performance. The memory performance is measured as both the percentage of correctly recalled object locations (hit rate) and the mean distances of correctly recalled objects (spatial accuracy). It is shown that the True-3D accentuating of grids (depth offset: 5 cm) significantly enhances the spatial accuracy of recalled map object locations, whereas the True-3D emphasis of streets significantly improves the hit rate of recalled map object locations. These results show the potential of True-3D displays for an improvement of the cognitive representation of learned cartographic information.

  6. STORM: A STatistical Object Representation Model

    SciTech Connect

    Rafanelli, M.; Shoshani, A.

    1989-11-01

    In this paper we explore the structure and semantic properties of the entities stored in statistical databases. We call such entities "statistical objects" (SOs) and propose a new "statistical object representation model," based on a graph representation. We identify a number of SO representational problems in current models and propose a methodology for their solution. 11 refs.

  7. Combining depth and color data for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV-sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data is pre-processed in several ways and the obtained representation is used to train a RAM neural network (memory based reasoning approach) to detect different components within TV-sets. The shape recognizing architecture has been implemented and tested in a demonstration setup.

  8. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and growing Internet usage requires object recognition for many applications, particularly for occluded objects. Occlusion, however, remains largely unhandled, since it disrupts the relations between the feature points extracted from an image; research therefore continues into efficient techniques and easy-to-use algorithms that can cope with it. The aim of this work is to review algorithms for recognizing occluded objects and to weigh their pros and cons: the features extracted from an occluded object must distinguish it from other co-existing objects, and new techniques must be able to differentiate the occluded fragments and sections within an image.

  9. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
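
    A simplified single-descriptor version of the sparse-representation classification step might look as follows; the real method pools many meshSIFT descriptors per scan through a multitask SRC, and the orthogonal matching pursuit solver used here is only a stand-in for the paper's sparse coder.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(probe_desc, gallery_descs, gallery_labels, n_nonzero=30):
        """Code the probe descriptor over the gallery dictionary, then pick the
        class whose atoms reconstruct it with the smallest residual.
        gallery_descs: (n_atoms, dim); gallery_labels: (n_atoms,)."""
        D = gallery_descs.T                                    # (dim, n_atoms)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(D, probe_desc)
        x = omp.coef_                                          # sparse code over atoms
        best_label, best_res = None, np.inf
        for label in np.unique(gallery_labels):
            x_c = np.where(gallery_labels == label, x, 0.0)    # keep this class's atoms
            res = np.linalg.norm(probe_desc - D @ x_c)
            if res < best_res:
                best_label, best_res = label, res
        return best_label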

  10. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  11. An optimal sensing strategy for recognition and localization of 3-D natural quadric objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hernsoo

    1991-01-01

    An optimal sensing strategy for an optical proximity sensor system engaged in the recognition and localization of 3-D natural quadric objects is presented. The optimal sensing strategy consists of the selection of an optimal beam orientation and the determination of an optimal probing plane that compose an optimal data collection operation known as an optimal probing. The decision of an optimal probing is based on the measure of discrimination power of a cluster of surfaces on a multiple interpretation image (MII), where the measure of discrimination power is defined in terms of a utility function computing the expected number of interpretations that can be pruned out by a probing. An object representation suitable for active sensing based on a surface description vector (SDV) distribution graph and hierarchical tables is presented. Experimental results are shown.

  12. Frio, Yegua objectives of E. Texas 3D seismic

    SciTech Connect

    1996-07-01

    Houston companies plan to explore deeper formations along the Sabine River on the Texas and Louisiana Gulf Coast. PetroGuard Co. Inc. and Jebco Seismic Inc., Houston, jointly secured a seismic and leasing option from Hankamer family et al. on about 120 sq miles in Newton County, Tex., and Calcasieu Parish, La. PetroGuard, which specializes in oilfield rehabilitation, has production experience in the area. Historic production in the area spans three major geologic trends: Oligocene Frio/Hackberry, downdip and mid-dip Eocene Yegua, and Eocene Wilcox. In the southern part of the area, to be explored first, the trends lie at 9,000-10,000 ft, 10,000-12,000 ft, and 14,000-15,000 ft, respectively. Output Exploration Co., an affiliate of Input/Output Inc., Houston, acquired from PetroGuard and Jebco all exploratory drilling rights in the option area. Output will conduct 3D seismic operations over nearly half the acreage this summer. Data acquisition started late this spring. Output plans to use a combination of a traditional land recording system and I/O's new RSR 24-bit radio telemetry system because the area spans environments from dry land to swamp.

  13. 3D object detection from roadside data using laser scanners

    NASA Astrophysics Data System (ADS)

    Tang, Jimmy; Zakhor, Avideh

    2011-03-01

    The detection of objects on a given road path by vehicles equipped with range measurement devices is important to many civilian and military applications such as obstacle avoidance in autonomous navigation systems. In this thesis, we develop a method to detect objects of a specific size lying on a road using an acquisition vehicle equipped with forward looking Light Detection And Range (LiDAR) sensors and inertial navigation system. We use GPS data to accurately place the LiDAR points in a world map, extract point cloud clusters protruding from the road, and detect objects of interest using weighted random forest trees. We show that our proposed method is effective in identifying objects for several road datasets collected with various object locations and vehicle speeds.
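
    A rough sketch of such a pipeline is given below; the thresholds, the clustering step, and the cluster features are assumptions, and a plain random forest stands in for the weighted random forest trees mentioned above.

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.ensemble import RandomForestClassifier

    def extract_candidate_objects(points, road_height=0.0, min_height=0.1, eps=0.3):
        """Keep georeferenced LiDAR points protruding above the road surface,
        cluster them, and summarize each cluster with simple features.
        points: (N, >=3) array with columns x, y, z (z = height)."""
        above = points[points[:, 2] > road_height + min_height]
        labels = DBSCAN(eps=eps, min_samples=10).fit_predict(above[:, :3])
        feats, clusters = [], []
        for k in set(labels) - {-1}:                      # -1 is DBSCAN noise
            c = above[labels == k]
            extent = c[:, :3].max(axis=0) - c[:, :3].min(axis=0)
            feats.append(np.concatenate([extent, [len(c)]]))
            clusters.append(c)
        return np.array(feats), clusters

    # A classifier would then be trained on labelled cluster features, e.g.:
    # clf = RandomForestClassifier(n_estimators=200).fit(train_feats, train_labels)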

  14. Profile of students' comprehension of 3D molecule representation and its interconversion on chirality

    NASA Astrophysics Data System (ADS)

    Setyarini, M.; Liliasari, Kadarohman, Asep; Martoprawiro, Muhamad A.

    2016-02-01

    This study aims at describing (1) students' level of comprehension of 3D molecule representations and their interconversion on chirality, and (2) the factors causing difficulties in comprehending them. Data were collected using a multiple-choice test consisting of eight questions. The participants were required to give answers along with their reasoning. The test was developed based on the indicators of concept comprehension. The study was conducted with 161 college students enrolled in a stereochemistry topic in the odd semester (2014/2015) from two LPTK (teacher training institutes) in Bandar Lampung and Gorontalo, and one public university in Bandung. The results indicate that 5% of the students showed a high level of comprehension of 3D molecule representations and their interconversion, 22% a moderate level, and 73% a low level. The dominant factors identified as causes of difficulty in comprehending 3D molecule representation and its interconversion were (i) a lack of spatial awareness, (ii) violation of the rules for determining absolute configuration, (iii) imprecise placement of observers, (iv) a lack of rotation operations, and (v) a lack of understanding of the correlation between the representations. This study recommends that instruction include more rigorous spatial awareness training tasks accompanied by dynamic visualization media of the molecules involved; learning with static molecular models can also help students overcome the difficulties encountered.

  15. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and on different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar or better than state-of-the-art on textured and non textured shape retrieval benchmarks and give interesting insights on effectiveness of different shape descriptors and graph kernels.

  16. Prototyping a Sensor Enabled 3d Citymodel on Geospatial Managed Objects

    NASA Astrophysics Data System (ADS)

    Kjems, E.; Kolář, J.

    2013-09-01

    One of the major development efforts within the GI Science domain is pointing at sensor-based information and the usage of real-time information coming from geographically referenced features in general. At the same time, 3D city models are mostly justified as objects for visualization purposes rather than as the foundation of a geographic data representation of the world. The combination of 3D city models and real-time information-based systems, though, can provide a whole new setup for data fusion within an urban environment and provide time-critical information, preserving our limited resources in the most sustainable way. Using 3D models with consistent object definitions gives us the possibility to avoid troublesome abstractions of reality and to design even complex urban systems fusing information from various sources of data. These systems are difficult to design with the traditional software development approach based on major software packages and traditional data exchange. The data streams vary from urban domain to urban domain and from system to system, which is why it is almost impossible to design one complete, constrained software complex that takes care of all conceivable instances now and in the future. On several occasions we have been advocating a new and advanced formulation of real-world features using the concept of Geospatial Managed Objects (GMO). This paper presents the outcome of the InfraWorld project, a 4 million Euro project financed primarily by the Norwegian Research Council, where the concept of GMOs has been applied in various situations on various running platforms of an urban system. The paper focuses on user experiences and interfaces rather than core technical and developmental issues. The project was primarily focused on prototyping rather than realistic implementations, although the results concerning applicability are quite clear.

  17. Recognition of Simple 3D Geometrical Objects under Partial Occlusion

    NASA Astrophysics Data System (ADS)

    Barchunova, Alexandra; Sommer, Gerald

    In this paper we present a novel procedure for contour-based recognition of partially occluded three-dimensional objects. In our approach we use images of real and rendered objects whose contours have been deformed by a restricted change of the viewpoint. The preparatory part consists of contour extraction, preprocessing, local structure analysis and feature extraction. The main part deals with an extended construction and functionality of the classifier ensemble Adaptive Occlusion Classifier (AOC). It relies on a hierarchical fragmenting algorithm to perform a local structure analysis which is essential when dealing with occlusions. In the experimental part of this paper we present classification results for five classes of simple geometrical figures: prism, cylinder, half cylinder, cube, and bridge. We compare classification results for three classical feature extractors: Fourier descriptors, pseudo-Zernike moments and Zernike moments.
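
    For reference, the Fourier descriptors compared above can be computed from a closed 2D contour roughly as follows; this is a standard formulation, not necessarily the exact normalization used in the paper.

    import numpy as np

    def fourier_descriptors(contour, n_coeffs=16):
        """Fourier descriptors of a closed contour given as an (N, 2) array of
        boundary points, made translation-, scale- and start-point-insensitive
        in the usual way."""
        z = contour[:, 0] + 1j * contour[:, 1]     # complex boundary signal
        F = np.fft.fft(z)
        F[0] = 0.0                                 # drop DC term -> translation invariance
        F = np.abs(F)                              # magnitudes -> rotation/start-point invariance
        if F[1] > 0:
            F = F / F[1]                           # normalize -> scale invariance
        return F[1:n_coeffs + 1]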

  18. Surface gloss and color perception of 3D objects

    PubMed Central

    Xiao, Bei; Brainard, David H.

    2008-01-01

    Two experiments explore the color perception of objects in complex scenes. The first experiment examines the color perception of objects across variation in surface gloss. Observers adjusted the color appearance of a matte sphere to match that of a test sphere. Across conditions we varied the body color and glossiness of the test sphere. The data indicate that observers do not simply match the average light reflected from the test. Indeed, the visual system compensates for the physical effect of varying the gloss, so that appearance is stabilized relative to what is predicted by the spatial average. The second experiment examines how people perceive color across locations on an object. We replaced the test sphere with a soccer ball that had one of its hexagonal faces colored. Observers were asked to adjust the match sphere to have the same color appearance as this test patch. The test patch could be located at either an upper or lower location on the soccer ball. In addition, we varied the surface gloss of the entire soccer ball (including the test patch). The data show that there is an effect of test patch location on observers' color matching, but this effect is small compared to the physical change in the average light reflected from the test patch across the two locations. In addition, the effect of glossy highlights on the color appearance of the test patch was consistent with the results from Experiment 1. PMID:18598406

  19. Parts, Cavities, and Object Representation in Infancy

    ERIC Educational Resources Information Center

    Hayden, Angela; Bhatt, Ramesh S.; Kangas, Ashley; Zieber, Nicole

    2011-01-01

    Part representation is not only critical to object perception but also plays a key role in a number of basic visual cognition functions, such as figure-ground segregation, allocation of attention, and memory for shapes. Yet, virtually nothing is known about the development of part representation. If parts are fundamental components of object shape…

  20. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.
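
    A heavily simplified sketch of patch-based 3D sparse coding is given below: it learns a dictionary over flattened 3D patches and reconstructs each patch from its sparse code. The patch size, dictionary size, use of non-overlapping patches, and scikit-learn's dictionary learner are all assumptions; the actual 3D SR pipeline is more elaborate.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def denoise_3d_sparse(volume, patch=4, n_atoms=128, n_nonzero=5):
        """Sparse-code non-overlapping 3D patches over a learned dictionary and
        write the reconstructions back (edges not covered by a patch stay zero)."""
        p = patch
        zs, ys, xs = (np.arange(0, s - p + 1, p) for s in volume.shape)
        patches = np.array([volume[z:z+p, y:y+p, x:x+p].ravel()
                            for z in zs for y in ys for x in xs])
        mean = patches.mean(axis=1, keepdims=True)            # remove patch DC level
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=n_nonzero)
        codes = dico.fit(patches - mean).transform(patches - mean)
        recon = codes @ dico.components_ + mean
        out, i = np.zeros_like(volume, dtype=float), 0
        for z in zs:
            for y in ys:
                for x in xs:
                    out[z:z+p, y:y+p, x:x+p] = recon[i].reshape(p, p, p)
                    i += 1
        return out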

  1. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  2. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual assessment, quantitative assessment and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  3. 3D reconstruction based on multiple views for close-range objects

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Zhang, Jianqing

    2007-06-01

    It is difficult for traditional photogrammetry techniques to reconstruct 3D models of close-range objects. To overcome this restriction and realize 3D reconstruction of complex objects, we present a realistic approach based on multi-baseline stereo vision. It incorporates image matching based on short-baseline multiple views, 3D measurement based on multi-ray intersection, and 3D reconstruction of the object based on a TIN or a parametric geometric model. Different complex objects are reconstructed in this way. The results demonstrate the feasibility and effectiveness of the method.
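
    The multi-ray intersection step can be illustrated with a least-squares sketch: the reconstructed 3D point is the one closest to all viewing rays of a matched feature. The ray origins and directions are assumed to come from the calibrated cameras; this is a generic formulation, not the paper's exact solver.

    import numpy as np

    def intersect_rays(origins, directions):
        """Least-squares intersection of several 3D rays (one per image in which
        the feature was matched). Each ray is origin o + s * d."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)     # projector onto the ray's normal plane
            A += M
            b += M @ o
        return np.linalg.solve(A, b)           # point minimizing distance to all rays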

  4. OB3D, a new set of 3D objects available for research: a web-based study

    PubMed Central

    Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean

    2014-01-01

    Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets that cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available on-line as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, which allows simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the naming and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920
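
    A minimal loader sketch for such files is given below; the assumed layout is one dot per line as 'x y z nx ny nz', which may differ from the actual OB3D format, and the subsampling mirrors the dot-threshold experiment only loosely.

    import numpy as np

    def load_ob3d(path, n_dots=None, seed=0):
        """Load an OB3D-style ASCII dot file and optionally subsample to n_dots."""
        data = np.loadtxt(path)                    # assumes plain numeric columns
        xyz, normals = data[:, :3], data[:, 3:6]
        if n_dots is not None and n_dots < len(xyz):
            idx = np.random.default_rng(seed).choice(len(xyz), n_dots, replace=False)
            xyz, normals = xyz[idx], normals[idx]
        return xyz, normals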

  5. Combining laser scan and photogrammetry for 3D object modeling using a single digital camera

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Zhang, Hong; Zhang, Xiangwei

    2009-07-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through some 3D scanning method. Laser scanning and photogrammetry are the two main methods used. For laser scanning, a video camera and a laser source are necessary, and for photogrammetry, a digital still camera with high-resolution imaging is indispensable. In some 3D modeling tasks, the two methods are integrated to get satisfactory results. Although many research works have been done on how to combine the results of the two methods, no work has been reported on designing an integrated device at low cost. In this paper, a new 3D scan system combining laser scanning and photogrammetry using a single consumer digital camera is proposed. Nowadays there are many consumer digital cameras, such as the Canon EOS 5D Mark II, that offer more than 10-megapixel still photo recording and full 1080p HD movie recording, so an integrated scan system can be designed around such a camera. A square plate glued with coded marks is used to place the 3D objects, and two straight wooden rulers, also glued with coded marks, can be laid on the plate freely. In the photogrammetry module, the coded marks on the plate define a world coordinate system and can be used as a control network to calibrate the camera, and the planes of the two rulers can also be determined. The feature points of the object and a rough volume representation from the silhouettes are obtained in this module. In the laser scan module, a hand-held line laser is used to scan the object, and the two straight rulers are used as reference planes to determine the position of the laser. The laser scan results in a dense point cloud, which can be aligned automatically using the calibrated camera parameters. The final complete digital model is obtained through a new patchwise energy functional method by fusing the feature points, the rough volume and the dense point cloud. The design

  6. 3D representations of amino acids—applications to protein sequence comparison and classification

    PubMed Central

    Li, Jie; Koehl, Patrice

    2014-01-01

    The amino acid sequence of a protein is the key to understanding its structure and ultimately its function in the cell. This paper addresses the fundamental issue of encoding amino acids in such a way that the representation of a protein sequence facilitates the decoding of its information content. We show that a feature-based representation in a three-dimensional (3D) space derived from amino acid substitution matrices provides an adequate representation that can be used for direct, geometry-based comparison of protein sequences. We measure the performance of such a representation in the context of the protein structural fold prediction problem. We compare the results of classifying different sets of proteins belonging to distinct structural folds against classifications of the same proteins obtained from sequence alone or directly from structural information. We find that sequence alone performs poorly as a structure classifier. We show, in contrast, that the use of the three-dimensional representation of the sequences significantly improves the classification accuracy. We conclude with a discussion of the current limitations of such a representation and with a description of potential improvements. PMID:25379143
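
    One way to read "a 3D space derived from amino acid substitution matrices" is as a low-dimensional embedding of substitution-derived dissimilarities. The sketch below applies multidimensional scaling to a toy dissimilarity matrix; the matrix values and the construction rule are assumptions for illustration, not the paper's actual feature derivation.

      import numpy as np
      from sklearn.manifold import MDS

      # Toy symmetric dissimilarity matrix between 5 amino acids (hypothetical values,
      # e.g. derived from a substitution matrix as d_ij = s_ii + s_jj - 2*s_ij).
      D = np.array([
          [0.0, 2.0, 4.0, 4.5, 3.0],
          [2.0, 0.0, 3.5, 4.0, 2.5],
          [4.0, 3.5, 0.0, 1.5, 3.0],
          [4.5, 4.0, 1.5, 0.0, 3.5],
          [3.0, 2.5, 3.0, 3.5, 0.0],
      ])

      mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(D)      # one 3D point per amino acid
      print(coords.shape)                # (5, 3)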

  7. Reducing Non-Uniqueness in Satellite Gravity Inversion using 3D Object Oriented Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2013-12-01

    Non-uniqueness in the interpretation of satellite gravity data has usually been reduced by using a priori information from various sources, e.g. seismic tomography models. The reduction in non-uniqueness has been based on velocity-density conversion formulas or on user interpretation of 3D subsurface structures (objects) in seismic tomography models. However, these processes introduce additional uncertainty, either through the conversion relations, because of their dependency on other physical parameters such as temperature and pressure, or through bias in the interpretation due to user choices and experience. In this research, a new methodology is introduced to extract 3D subsurface structures from 3D geophysical data using a state-of-the-art 3D Object Oriented Image Analysis (OOA) technique. 3D OOA is tested using a set of synthetic models that simulate the real situation in the study area of this research. Then, 3D OOA is used to extract 3D subsurface objects from a real 3D seismic tomography model. The extracted 3D objects are used to reconstruct a forward model, and its response is compared with the measured satellite gravity. Finally, the result of the forward modelling, based on the extracted 3D objects, is used to constrain the inversion of satellite gravity data. Through this work, a new object-based approach is introduced to interpret and extract 3D subsurface objects from 3D geophysical data, which can then be used to constrain the modelling and inversion of potential field data. In summary, a new approach is introduced to constrain the inversion of satellite gravity measurements and enhance interpretation capabilities.

  8. A Novel Multi-Purpose Matching Representation of Local 3D Surfaces: A Rotationally Invariant, Efficient, and Highly Discriminative Approach With an Adjustable Sensitivity.

    PubMed

    Al-Osaimi, Faisal R

    2016-02-01

    In this paper, a novel approach to local 3D surface matching representation suitable for a range of 3D vision applications is introduced. Local 3D surface patches around key points on the 3D surface are represented by 2D images such that the representing 2D images enjoy certain characteristics which positively impact matching accuracy, robustness, and speed. First, the proposed representation is complete, in the sense that there is no information loss during its computation. Second, the 2D representations are strictly invariant to all 3DoF rotations. To make optimal use of surface information, the sensitivity of the representations to surface information is adjustable. This also provides the proposed matching representation with the means to adjust optimally to a particular class of problems/applications or to an acquisition technology. Each 2D matching representation is a sequence of adjustable integral kernels, where each kernel is efficiently computed from a triple of precise 3D curves (profiles) formed by intersecting three concentric spheres with the 3D surface. Robust techniques for sampling the profiles and establishing correspondences among them were devised. Based on the proposed matching representation, two techniques for the detection of key points are presented. The first is suitable for static images, while the second is suitable for 3D videos. The approach was tested on the Face Recognition Grand Challenge v2.0, the 3D Twins Expression Challenge, and the Bosphorus data sets, and a superior face recognition performance was achieved. In addition, the proposed approach was used in object class recognition and tested on a Kinect data set. PMID:26513787

  10. Whole versus Part Presentations of the Interactive 3D Graphics Learning Objects

    ERIC Educational Resources Information Center

    Azmy, Nabil Gad; Ismaeel, Dina Ahmed

    2010-01-01

    The purpose of this study is to present an analysis of how the structure and design of Interactive 3D Graphics Learning Objects can be effective and efficient in terms of performance, time on task, and learning efficiency. The study explored two treatments, namely whole versus part presentations of the Interactive 3D Graphics Learning Objects,…

  11. The Representation of Cultural Heritage from Traditional Drawing to 3d Survey: the Case Study of Casamary's Abbey

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Saccone, M.

    2016-06-01

    In 3D surveying, the aspects most discussed in the scientific community are those related to the acquisition of data from an integrated survey (laser scanning, photogrammetric, topographic and traditional direct survey), rather than those relating to the interpretation of the data. Yet in traditional methods of representation, data interpretation, such as that of the philological reconstruction, constitutes the most important aspect. It is therefore essential, in modern systems of survey and representation, to filter the information acquired. In the system based on integrated survey that we have adopted, the 3D object, characterized by a cloud of georeferenced points defined by their color values, forms the core of the elaboration. It allows targeted analyses to be carried out, using section planes as a tool for selecting and filtering data, comparable with those of traditional drawings. In the case study of the Abbey of Casamari (Veroli), one of the most important Cistercian settlements in Italy, surveyed under an agreement between the Ministry of Cultural Heritage and Activities and Tourism (MiBACT) and Roma Tre University within the project "Assessment of the seismic safety of the state museum", the reference 3D model, consisting of the superposition of georeferenced data from various surveys, is the tool with which to develop representative models comparable to traditional ones. It provides the necessary spatial environment for drawing up plans and sections with a level of definition sufficient to develop thematic analyses related to construction phases, state of deterioration and structural features.
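
    The use of section planes as a selection and filtering tool on a georeferenced point cloud can be sketched as keeping the points that lie within a tolerance of a plane. The following minimal Python sketch assumes a simple Nx3 array layout for the cloud; it is an illustration of the idea, not the workflow used in the project.

      import numpy as np

      def section(points, plane_point, plane_normal, tol=0.01):
          """Return the points lying within `tol` (same units as the cloud)
          of the plane defined by a point and a normal."""
          n = plane_normal / np.linalg.norm(plane_normal)
          dist = np.abs((points - plane_point) @ n)   # distance to the plane
          return points[dist <= tol]

      cloud = np.random.rand(100000, 3) * 10.0        # stand-in for a survey point cloud
      slice_pts = section(cloud, plane_point=np.array([5., 0., 0.]),
                          plane_normal=np.array([1., 0., 0.]), tol=0.05)
      print(len(slice_pts), "points fall on the section plane")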

  12. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    NASA Astrophysics Data System (ADS)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: first, the reduction of the dose distribution to a histogram results in the loss of spatial information, and second, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We used a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a particularly low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
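
    As a rough illustration of the parameterization idea (not the authors' exact parameterization), the Python sketch below reduces a 2D dose-surface map of the rectal wall to a few geometric numbers: lateral and longitudinal extent above a dose threshold, and an eccentricity from second-order moments of the high-dose region. Such features could then feed any probabilistic classifier; the array layout and threshold are assumptions.

      import numpy as np

      def dose_shape_features(dose_map, threshold):
          """dose_map: 2D array (longitudinal x lateral) of dose on the unfolded rectal wall."""
          mask = dose_map >= threshold
          ys, xs = np.nonzero(mask)
          if len(xs) < 2:
              return dict(long_extent=0.0, lat_extent=0.0, eccentricity=0.0)
          long_extent = (ys.max() - ys.min() + 1) / dose_map.shape[0]
          lat_extent = (xs.max() - xs.min() + 1) / dose_map.shape[1]
          # eccentricity from the covariance of the high-dose region
          cov = np.cov(np.vstack([xs, ys]))
          evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
          ecc = np.sqrt(1.0 - evals[1] / evals[0]) if evals[0] > 0 else 0.0
          return dict(long_extent=long_extent, lat_extent=lat_extent, eccentricity=ecc)

      demo = np.random.rand(64, 32) * 70.0     # stand-in dose-surface map in Gy
      print(dose_shape_features(demo, threshold=60.0))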

  13. 3d Modeling of cultural heritage objects with a structured light system.

    NASA Astrophysics Data System (ADS)

    Akca, Devrim

    3D modeling of cultural heritage objects is an expanding application area. The selection of the right technology is very important and strictly related to the project requirements, budget and the user's experience. Triangulation-based active sensors, e.g. structured light systems, are used for many kinds of 3D object reconstruction tasks and in particular for 3D recording of cultural heritage objects. This study presents the experiences and results of two such projects in which a close-range structured light system was used for 3D digitization. The paper covers the essential steps of the 3D object modeling pipeline, i.e. digitization, registration, surface triangulation, editing, texture mapping and visualization. The capabilities of the hardware and software used are addressed. Particular emphasis is given to a coded structured light system as an option for data acquisition.

  14. Spherical blurred shape model for 3-D object and pose recognition: quantitative analysis and HCI applications in smart environments.

    PubMed

    Lopes, Oscar; Reyes, Miguel; Escalera, Sergio; Gonzàlez, Jordi

    2014-12-01

    The use of depth maps is of increasing interest after the advent of cheap multisensor devices based on structured light, such as Kinect. In this context, there is a strong need for powerful 3-D shape descriptors able to generate rich object representations. Although several 3-D descriptors have already been proposed in the literature, the search for discriminative and computationally efficient descriptors is still an open issue. In this paper, we propose a novel point cloud descriptor called the spherical blurred shape model (SBSM) that successfully encodes the structure density and local variabilities of an object based on shape voxel distances and a neighborhood propagation strategy. The proposed SBSM is proven to be rotation and scale invariant, robust to noise and occlusions, highly discriminative for multiple categories of complex objects like the human hand, and computationally efficient, since the SBSM complexity is linear in the number of object voxels. Experimental evaluation on public depth multiclass object data, 3-D facial expression data, and a novel hand pose data set shows significant performance improvements in relation to state-of-the-art approaches. Moreover, the effectiveness of the proposal is also proved for object spotting in 3-D scenes and for real-time automatic hand pose recognition in human computer interaction scenarios. PMID:25415944

  15. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long-Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. Systemic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity. PMID:24991752

  16. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  17. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose a depth-first-search segmentation algorithm that traverses a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.
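
    The coarse-surface and segmentation steps can be sketched with off-the-shelf tools: marching cubes on a volume, followed by connected-component labelling of the face graph. The paper uses its own adaptive-blob volume and a depth-first traversal; the Python sketch below only illustrates the generic pipeline on a toy volume.

      import numpy as np
      from skimage import measure
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import connected_components

      # toy volume: two separate solid spheres
      z, y, x = np.mgrid[0:60, 0:60, 0:60]
      vol = ((x-18)**2 + (y-30)**2 + (z-30)**2 < 10**2) | \
            ((x-42)**2 + (y-30)**2 + (z-30)**2 < 8**2)

      verts, faces, normals, values = measure.marching_cubes(vol.astype(float), level=0.5)

      # faces sharing a vertex are connected; label the components of that face graph
      rows = np.repeat(np.arange(len(faces)), 3)
      cols = faces.ravel()
      incidence = coo_matrix((np.ones(len(rows)), (rows, cols)),
                             shape=(len(faces), len(verts)))
      adjacency = incidence @ incidence.T      # face-face adjacency via shared vertices
      n_parts, labels = connected_components(adjacency, directed=False)
      print(n_parts, "connected surface components")   # expect 2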

  18. 3D object recognition using kernel construction of phase wrapped images

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Su, Hongjun

    2011-06-01

    Kernel methods are effective machine learning techniques for many image-based pattern recognition problems. Incorporating 3D information is useful in such applications. Optical profilometry and interferometric techniques provide 3D information in an implicit form. Typically, a phase unwrapping process, which is often hindered by the presence of noise, spots of low intensity modulation, and instability of the solutions, is applied to retrieve the proper depth information. In certain applications such as pattern recognition problems, the goal is to classify the 3D objects in the image, rather than to simply display or reconstruct them. In this paper we present a technique for constructing kernels on the measured data directly, without explicit phase unwrapping. Such a kernel naturally incorporates the 3D depth information and can be used to improve systems involving 3D object analysis and classification.
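
    A kernel that works directly on wrapped phase can be built from phase differences mapped through a periodic function, so that values near 0 and 2π are treated as close. The Python sketch below is a generic wrap-aware RBF-style kernel, not necessarily the kernel proposed in the paper.

      import numpy as np

      def wrapped_phase_kernel(phi_a, phi_b, gamma=1.0):
          """Gaussian-like kernel on wrapped phase images.

          Uses 1 - cos(delta) as a per-pixel dissimilarity, which is invariant
          to 2*pi wrap-around, so no phase unwrapping is required."""
          delta = phi_a - phi_b
          d2 = np.mean(1.0 - np.cos(delta))    # in [0, 2], periodic in delta
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      phi1 = rng.uniform(0, 2*np.pi, size=(64, 64))
      phi2 = (phi1 + 0.05*rng.standard_normal((64, 64))) % (2*np.pi)  # nearly identical mod 2*pi
      print(wrapped_phase_kernel(phi1, phi2))                          # close to 1
      print(wrapped_phase_kernel(phi1, rng.uniform(0, 2*np.pi, (64, 64))))  # noticeably smaller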

  19. Separating the Representation from the Science: Training Students in Comprehending 3D Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Silver, D.; Chiang, J.; Halpern, D.; Oh, K.; Tremaine, M.

    2011-12-01

    Studies of students taking first year geology and earth science courses at universities find that a remarkable number of them are confused by the three-dimensional representations used to explain the science [1]. Comprehension of these 3D representations has been found to be related to an individual's spatial ability [2]. A variety of interactive programs and animations have been created to help explain the diagrams to beginning students [3, 4]. This work has demonstrated comprehension improvement and removed a gender gap between male (high spatial) and female (low spatial) students [5]. However, not much research has examined what makes the 3D diagrams so hard to understand or attempted to build a theory for creating training designed to remove these difficulties. Our work has separated the science labeling and comprehension of the diagrams from the visualizations to examine how individuals mentally see the visualizations alone. In particular, we asked subjects to create a cross-sectional drawing of the internal structure of various 3D diagrams. We found that viewing planes (the coordinate system the designer applies to the diagram), cutting planes (the planes formed by the requested cross sections) and visual property planes (the planes formed by the prominent features of the diagram, e.g., a layer at an angle of 30 degrees to the top surface of the diagram) that deviated from a Cartesian coordinate system imposed by the viewer caused significant problems for subjects, in part because these deviations forced them to mentally re-orient their viewing perspective. Problems with deviations in all three types of plane were significantly harder than those deviating on one or two planes. Our results suggest training that does not focus on showing how the components of various 3D geologic formations are put together but rather training that guides students in re-orienting themselves to deviations that differ from their right-angle view of the world, e.g., by showing how

  20. Electrophysiological evidence of separate pathways for the perception of depth and 3D objects.

    PubMed

    Gao, Feng; Cao, Bihua; Cao, Yunfei; Li, Fuhong; Li, Hong

    2015-05-01

    Previous studies have investigated the neural mechanism of 3D perception, but the neural distinction between 3D-objects and depth processing remains unclear. In the present study, participants viewed three types of graphics (planar graphics, perspective drawings, and 3D objects) while event-related potentials (ERP) were recorded. The ERP results revealed the following: (1) 3D objects elicited a larger and delayed N1 component than the other two types of stimuli; (2) during the P2 time window, significant differences between 3D objects and the perspective drawings were found mainly over a group of electrode sites in the left lateral occipital region; and (3) during the N2 complex, differences between planar graphics and perspective drawings were found over a group of electrode sites in the right hemisphere, whereas differences between perspective drawings and 3D objects were observed at another group of electrode sites in the left hemisphere. These findings support the claim that depth processing and object identification might be processed by separate pathways and at different latencies.

  1. Compact encoding of 3-D voxel surfaces based on pattern code representation.

    PubMed

    Kim, Chang-Su; Lee, Sang-Uk

    2002-01-01

    In this paper, we propose a lossless compression algorithm for three-dimensional (3-D) binary voxel surfaces, based on the pattern code representation (PCR). In PCR, a voxel surface is represented by a series of pattern codes. The pattern of a voxel v is defined as the 3 x 3 x 3 array of voxels centered on v. Therefore, the pattern code for v informs of the local shape of the voxel surface around v. The proposed algorithm achieves a coding gain, since the patterns of adjacent voxels are highly correlated with each other. The performance of the proposed algorithm is evaluated using various voxel surfaces, which are scan-converted from triangular mesh models. It is shown that the proposed algorithm requires only approximately 0.5-1 bits per black voxel (bpbv) to store or transmit the voxel surfaces.
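
    The pattern of a voxel can be read off directly from its 3 x 3 x 3 neighbourhood and packed into a 27-bit code. The Python sketch below uses an assumed scan order for the bits, not necessarily the paper's exact bit ordering, and ignores the subsequent entropy coding.

      import numpy as np

      def pattern_code(volume, v):
          """27-bit pattern code of the 3x3x3 neighbourhood centred on voxel v."""
          z, y, x = v
          block = volume[z-1:z+2, y-1:y+2, x-1:x+2]   # 3x3x3 array around v
          bits = block.astype(np.uint8).ravel()        # fixed scan order, 27 bits
          return int("".join(map(str, bits)), 2)

      vol = np.zeros((8, 8, 8), dtype=bool)
      vol[3:6, 3:6, 4] = True                          # a small surface patch
      print(pattern_code(vol, (4, 4, 4)))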

  2. 3D models automatic reconstruction of selected close range objects. (Polish Title: Automatyczna rekonstrukcja modeli 3D małych obiektów bliskiego zasiegu)

    NASA Astrophysics Data System (ADS)

    Zaweiska, D.

    2013-12-01

    Reconstruction of three-dimensional, realistic models of objects from digital images has been a research topic in many areas of science for many years. This development is stimulated by new technologies and tools which have appeared recently, such as digital photography, laser scanners, increases in equipment efficiency and the Internet. The objective of this paper is to present the results of automatic modeling of selected close-range objects, using digital photographs acquired with a Hasselblad H4D50 camera. The author's software tool was utilized for the calculations; it performs the successive stages of 3D model creation. The modeling process was presented as a complete process which starts from the acquisition of images and ends with the creation of a photorealistic 3D model in the same software environment. Experiments were performed for selected close-range objects, with an appropriately arranged image geometry forming a ring around the measured object. The Area Based Matching (CC/LSM) method and the RANSAC algorithm, with the use of tensor calculus, were utilized for automatic matching of points detected with the SUSAN algorithm. Surface reconstruction for model generation is one of the important stages of 3D modeling. Reconstruction of precise surfaces from a non-organized cloud of points, acquired from automatic processing of digital images, is a difficult task which has not been finally solved. Creation of polygonal models which may meet high requirements concerning modeling and visualization is required in many applications. The polygonal method is usually the best way to achieve a precise representation of measurement results and, at the same time, an optimum description of the surface. Three algorithms were tested: the volumetric method (VCG), the Poisson method and the ball pivoting method. Those methods are mostly applied to modeling of uniform grids of points. Results of experiments proved that incorrect

  3. Average Cross-Sectional Area of DebriSat Fragments Using Volumetrically Constructed 3D Representations

    NASA Technical Reports Server (NTRS)

    Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.

    2016-01-01

    Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described, as well as preliminary results of an analysis to determine the "optimal" number of images needed for
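
    For a closed convex mesh, the projected area along a direction d is one half of the sum of |n_i·d|·A_i over the faces, and averaging that over many random directions approaches one quarter of the total surface area. The small Python check below illustrates this averaging idea on a unit cube; it is a generic sketch of the principle behind Approach A, not the DebriSat imaging pipeline.

      import numpy as np

      # unit cube described by outward face normals and face areas
      normals = np.array([[ 1,0,0],[-1,0,0],[0, 1,0],[0,-1,0],[0,0, 1],[0,0,-1]], float)
      areas   = np.ones(6)                   # each face of the unit cube has area 1

      def projected_area(direction):
          d = direction / np.linalg.norm(direction)
          return 0.5 * np.sum(np.abs(normals @ d) * areas)

      rng = np.random.default_rng(0)
      dirs = rng.standard_normal((20000, 3))
      avg = np.mean([projected_area(d) for d in dirs])
      print(avg, "vs", areas.sum() / 4.0)    # both approx. 1.5 for the unit cube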

  4. Improving low-dose cardiac CT images using 3D sparse representation based processing

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in the diagnosis of coronary artery diseases due to its continuously improving temporal and spatial resolution. When helical CT with a low-pitch scanning mode is used, the effective radiation dose can be significant compared to other radiological exams. Many methods have been developed to reduce the radiation dose in coronary CT exams, including high-pitch scans using dual-source CT scanners and step-and-shoot scanning modes for both single-source and dual-source CT scanners. Additionally, software methods have been proposed to reduce noise in the reconstructed CT images, thus offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance for a given imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, a 3D dictionary representation based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in the sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.
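
    The core of a sparse-representation denoiser is to code small patches over a learned dictionary with a few nonzero coefficients and reconstruct from the codes. The Python sketch below uses 2D patches and scikit-learn for brevity (the paper works with 3D spatio-temporal patches and its own dictionary scheme); the parameter values are placeholders.

      import numpy as np
      from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)
      clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))   # piecewise-constant "image"
      noisy = clean + 0.1 * rng.standard_normal(clean.shape)

      patches = extract_patches_2d(noisy, (8, 8))
      X = patches.reshape(len(patches), -1)
      mean = X.mean(axis=1, keepdims=True)
      X = X - mean

      dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=4, random_state=0)
      codes = dico.fit(X).transform(X)               # sparse coefficients per patch
      X_rec = codes @ dico.components_ + mean
      denoised = reconstruct_from_patches_2d(X_rec.reshape(patches.shape), noisy.shape)
      print("noise std before/after:", np.std(noisy - clean), np.std(denoised - clean))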

  5. Sparse Representation of Deformable 3D Organs with Spherical Harmonics and Structured Dictionary

    PubMed Central

    Wang, Dan; Tewfik, Ahmed H.; Zhang, Yingchun; Shen, Yunhe

    2011-01-01

    This paper proposes a novel algorithm to sparsely represent a deformable surface (SRDS) with low dimensionality, based on spherical harmonic decomposition (SHD) and orthogonal subspace pursuit (OSP). The key idea of the SRDS method is to identify the subspaces from a training data set in the transformed spherical harmonic domain and then cluster each deformation into the best-fit subspace for fast and accurate representation. The algorithm is also generalized to organs with both interior and exterior surfaces. To test its feasibility, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques, and then both ex vivo and in vivo experiments are conducted using 3D magnetic resonance imaging (MRI) scans for verification in practical settings. All results demonstrate that the proposed algorithm features sparse representation of deformable surfaces with low dimensionality and high accuracy. Specifically, the precision, evaluated as the maximum error distance between the reconstructed surface and the MRI ground truth, is better than 3 mm in real MRI experiments. PMID:21941524
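
    The SHD step amounts to projecting a radius function r(θ, φ) sampled on a star-shaped surface onto spherical harmonics. The Python sketch below uses SciPy and a very simple quadrature; the surface, grid and truncation degree are illustrative simplifications, not the paper's setup.

      import numpy as np
      from scipy.special import sph_harm

      # sample a star-shaped surface as r(theta, phi) on a regular grid
      n_t, n_p = 40, 80
      theta = np.linspace(0, np.pi, n_t)                 # polar angle
      phi = np.linspace(0, 2*np.pi, n_p, endpoint=False)
      T, P = np.meshgrid(theta, phi, indexing="ij")
      r = 1.0 + 0.3*np.cos(2*T) + 0.1*np.sin(3*P)*np.sin(T)**3   # toy deformable organ

      # project onto Y_l^m up to degree L with simple quadrature on the sphere
      L = 6
      w = np.sin(T) * (np.pi/n_t) * (2*np.pi/n_p)        # crude area weights
      coeffs = {}
      for l in range(L+1):
          for m in range(-l, l+1):
              Y = sph_harm(m, l, P, T)                   # scipy: sph_harm(m, n, azimuth, polar)
              coeffs[(l, m)] = np.sum(r * np.conj(Y) * w)

      # low-dimensional reconstruction from the retained coefficients
      r_rec = sum(coeffs[(l, m)] * sph_harm(m, l, P, T)
                  for l in range(L+1) for m in range(-l, l+1)).real
      print("max reconstruction error:", np.abs(r - r_rec).max())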

  6. Segmentation of 3D tubular objects with adaptive front propagation and minimal tree extraction for 3D medical imaging.

    PubMed

    Cohen, Laurent D; Deschamps, Thomas

    2007-08-01

    We present a new fast approach for segmentation of thin branching structures, like vascular trees, based on Fast-Marching (FM) and Level Set (LS) methods. FM allows segmentation of tubular structures by inflating a "long balloon" from a user given single point. However, when the tubular shape is rather long, the front propagation may blow up through the boundary of the desired shape close to the starting point. Our contribution is focused on a method to propagate only the useful part of the front while freezing the rest of it. We demonstrate its ability to segment quickly and accurately tubular and tree-like structures. We also develop a useful stopping criterion for the causal front propagation. We finally derive an efficient algorithm for extracting an underlying 1D skeleton of the branching objects, with minimal path techniques. Each branch being represented by its centerline, we automatically detect the bifurcations, leading to the "Minimal Tree" representation. This so-called "Minimal Tree" is very useful for visualization and quantification of the pathologies in our anatomical data sets. We illustrate our algorithms by applying it to several arteries datasets.
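
    The "Minimal Tree" idea, representing each branch by the minimal-cost path back to a root, can be sketched on a voxel mask with a shortest-path solver. The paper uses Fast Marching fronts; Dijkstra on the voxel graph, as in the Python sketch below, is only a simple stand-in on a toy 2D mask.

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.csgraph import dijkstra

      # toy binary mask of an L-shaped "vessel" in 2D (the idea extends to 3D)
      mask = np.zeros((40, 40), bool)
      mask[5, 5:35] = True          # horizontal branch
      mask[5:35, 34] = True         # vertical branch

      idx = -np.ones(mask.shape, int)
      coords = np.argwhere(mask)
      idx[tuple(coords.T)] = np.arange(len(coords))

      # 4-connected graph whose edge costs are 1 inside the vessel
      G = lil_matrix((len(coords), len(coords)))
      for k, (i, j) in enumerate(coords):
          for di, dj in ((1, 0), (0, 1)):
              ni, nj = i + di, j + dj
              if 0 <= ni < 40 and 0 <= nj < 40 and mask[ni, nj]:
                  G[k, idx[ni, nj]] = 1.0

      root = idx[5, 5]
      dist, pred = dijkstra(G.tocsr(), directed=False, indices=root,
                            return_predecessors=True)

      # back-track from the farthest voxel to the root: one centerline branch
      tip = int(np.nanargmax(np.where(np.isinf(dist), np.nan, dist)))
      path, node = [], tip
      while node != root and node >= 0:
          path.append(tuple(coords[node])); node = pred[node]
      print("branch length (voxels):", len(path))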

  8. A generic algorithm for constructing hierarchical representations of geometric objects

    SciTech Connect

    Xavier, P.G.

    1995-10-01

    For a number of years, robotics researchers have exploited hierarchical representations of geometrical objects and scenes in motion planning, collision avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produce balanced, coherent hierarchies. Its worst-case running time is O(N^2 log N), but for non-pathological cases it is O(N log N), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives.
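
    The bottom-up flavour of such an algorithm can be sketched by repeatedly merging the nearest pair of clusters and wrapping each merge in a bounding sphere. The Python sketch below uses a naive O(N^2) nearest-pair scan instead of the paper's heuristic clustering, and a simple (non-minimal) enclosing-sphere rule; it only illustrates the hierarchy construction idea.

      import math

      class Node:
          def __init__(self, center, radius, children=()):
              self.center, self.radius, self.children = center, radius, children

      def merge(a, b):
          """A sphere guaranteed to contain spheres a and b (not the minimal one)."""
          d = math.dist(a.center, b.center)
          if d + b.radius <= a.radius:          # b already inside a
              return Node(a.center, a.radius, (a, b))
          if d + a.radius <= b.radius:          # a already inside b
              return Node(b.center, b.radius, (a, b))
          radius = (d + a.radius + b.radius) / 2.0
          t = (radius - a.radius) / d
          center = tuple(ca + t * (cb - ca) for ca, cb in zip(a.center, b.center))
          return Node(center, radius, (a, b))

      def build_hierarchy(primitives):
          nodes = [Node(c, r) for c, r in primitives]        # leaves: (center, radius)
          while len(nodes) > 1:
              # naive nearest-pair search; a heuristic clustering would avoid this cost
              i, j = min(((i, j) for i in range(len(nodes)) for j in range(i+1, len(nodes))),
                         key=lambda ij: math.dist(nodes[ij[0]].center, nodes[ij[1]].center))
              merged = merge(nodes[i], nodes[j])
              nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
          return nodes[0]

      root = build_hierarchy([((0, 0, 0), 1), ((0, 3, 0), 1), ((8, 0, 0), 2), ((8, 1, 0), 1)])
      print(root.radius, len(root.children))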

  9. Three-dimensional object recognition using gradient descent and the universal 3-D array grammar

    NASA Astrophysics Data System (ADS)

    Baird, Leemon C., III; Wang, Patrick S. P.

    1992-02-01

    A new algorithm is presented for applying Marill's minimum standard deviation of angles (MSDA) principle for interpreting line drawings without models. Even though no explicit models or additional heuristics are included, the algorithm tends to reach the same 3-D interpretations of 2-D line drawings that humans do. Marill's original algorithm repeatedly generated a set of interpretations and chose the one with the lowest standard deviation of angles (SDA). The algorithm presented here explicitly calculates the partial derivatives of SDA with respect to all adjustable parameters, and follows this gradient to minimize SDA. For a picture with lines meeting at m points forming n angles, the gradient descent algorithm requires O(n) time to adjust all the points, while the original algorithm required O(mn) time to do so. For the pictures described by Marill, this gradient descent algorithm running on a Macintosh II was found to be one to two orders of magnitude faster than the original algorithm running on a Symbolics, while still giving comparable results. Once the 3-D interpretation of the line drawing has been found, the 3-D object can be reduced to a description string using the Universal 3-D Array Grammar. This is a general grammar which allows any connected object represented as a 3-D array of pixels to be reduced to a description string. The algorithm based on this grammar is well suited to parallel computation, and could run efficiently on parallel hardware. This paper describes both the MSDA gradient descent algorithm and the Universal 3-D Array Grammar algorithm. Together, they transform a 2-D line drawing represented as a list of line segments into a string describing the 3-D object pictured. The strings could then be used for object recognition, learning, or storage for later manipulation.
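
    The essence of the MSDA step is to treat the unknown depth of each junction point as a free parameter and minimize the standard deviation of the 3D angles formed at the junctions. The Python sketch below uses numerical finite-difference gradients for brevity (the paper derives analytic partial derivatives) on a tiny toy line drawing.

      import numpy as np
      from itertools import combinations

      # 2D line drawing: image coordinates of junctions and the segments between them
      pts2d = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

      def sda(z):
          """Standard deviation of all angles at shared junctions, given depths z."""
          p3 = np.column_stack([pts2d, z])
          angles = []
          for j in range(len(pts2d)):
              nbrs = [b if a == j else a for a, b in edges if j in (a, b)]
              for u, v in combinations(nbrs, 2):
                  e1, e2 = p3[u] - p3[j], p3[v] - p3[j]
                  c = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
                  angles.append(np.arccos(np.clip(c, -1.0, 1.0)))
          return np.std(angles)

      rng = np.random.default_rng(0)
      z = 0.1 * rng.standard_normal(len(pts2d))   # small random depths break symmetry
      lr, eps = 0.05, 1e-4
      for _ in range(500):                        # plain gradient descent on the depths
          grad = np.array([(sda(z + eps*np.eye(len(z))[k]) - sda(z)) / eps
                           for k in range(len(z))])
          z -= lr * grad
      print("final SDA:", sda(z), "depths:", np.round(z, 2))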

  10. 3D object retrieval with multitopic model combining relevance feedback and LDA model.

    PubMed

    Leng, Biao; Zeng, Jiabei; Yao, Ming; Xiong, Zhang

    2015-01-01

    View-based 3D model retrieval uses a set of views to represent each object. Discovering the complex relationship between multiple views remains challenging in 3D object retrieval. Recent progress in the latent Dirichlet allocation (LDA) model leads us to propose its use for 3D object retrieval. This LDA approach explores the hidden relationships between extracted primordial features of these views. Since LDA is limited to a fixed number of topics, we further propose a multitopic model to improve retrieval performance. We take advantage of a relevance feedback mechanism to balance the contributions of multiple topic models with specified numbers of topics. We demonstrate our improved retrieval performance over the state-of-the-art approaches.
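
    The multi-topic idea can be sketched as fitting several LDA models with different numbers of topics over bag-of-visual-words histograms of the views, ranking by similarity in each topic space, and weighting the rankings. The Python sketch below uses scikit-learn; the toy features, the uniform weights standing in for the feedback-derived weights, and the pooling over views are all placeholders, not the paper's pipeline.

      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.metrics.pairwise import cosine_similarity

      rng = np.random.default_rng(0)
      # each 3D model described by a bag-of-visual-words histogram pooled over its views
      X = rng.integers(0, 20, size=(100, 50))        # 100 models, 50 visual words (toy data)
      query = X[:1]

      topic_counts = (8, 16, 32)
      models = [LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
                for k in topic_counts]
      sims = [cosine_similarity(m.transform(query), m.transform(X))[0] for m in models]

      # relevance feedback would adjust these weights; uniform here as a placeholder
      weights = np.ones(len(models)) / len(models)
      score = np.tensordot(weights, np.vstack(sims), axes=1)
      print("top-5 retrieved model indices:", np.argsort(-score)[:5])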

  11. High-purity 3D nano-objects grown by focused-electron-beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Sharma, Nidhi; Kölling, Sebastian; Koenraad, Paul M.; Koopmans, Bert

    2016-09-01

    To increase the efficiency of current electronics, a specific challenge for the next generation of memory, sensing and logic devices is to find suitable strategies to move from two- to three-dimensional (3D) architectures. However, the creation of real 3D nano-objects is not trivial. Emerging non-conventional nanofabrication tools are required for this purpose. One attractive method is focused-electron-beam induced deposition (FEBID), a direct-write process of 3D nano-objects. Here, we grow 3D iron and cobalt nanopillars by FEBID using diiron nonacarbonyl Fe2(CO)9, and dicobalt octacarbonyl Co2(CO)8, respectively, as starting materials. In addition, we systematically study the composition of these nanopillars at the sub-nanometer scale by atom probe tomography, explicitly mapping the homogeneity of the radial and longitudinal composition distributions. We show a way of fabricating high-purity 3D vertical nanostructures of ∼50 nm in diameter and a few micrometers in length. Our results suggest that the purity of such 3D nanoelements (above 90 at% Fe and above 95 at% Co) is directly linked to their growth regime, in which the selected deposition conditions are crucial for the final quality of the nanostructure. Moreover, we demonstrate that FEBID and the proposed characterization technique not only allow for growth and chemical analysis of single-element structures, but also offers a new way to directly study 3D core–shell architectures. This straightforward concept could establish a promising route to the design of 3D elements for future nano-electronic devices.

  13. A convolutional learning system for object classification in 3-D Lidar data.

    PubMed

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.

  14. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
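
    The hybrid search can be sketched as simulated annealing proposing candidate tag positions, each refined by a few gradient-descent steps on a squared range-residual cost. The Python sketch below uses an idealized range model and made-up reader positions; the actual RFID signal model and parameters of the paper are not reproduced.

      import numpy as np

      readers = np.array([[0., 0., 0.], [10., 0., 3.], [0., 10., 3.], [10., 10., 0.]])
      true_tag = np.array([4.0, 6.0, 1.5])
      rng = np.random.default_rng(1)
      ranges = np.linalg.norm(readers - true_tag, axis=1) + 0.1*rng.standard_normal(4)

      def cost(p):
          return np.sum((np.linalg.norm(readers - p, axis=1) - ranges) ** 2)

      def refine(p, steps=20, lr=0.05):
          for _ in range(steps):                   # gradient descent reduces local error
              d = np.linalg.norm(readers - p, axis=1)
              grad = 2 * np.sum(((d - ranges) / d)[:, None] * (p - readers), axis=0)
              p = p - lr * grad
          return p

      p, T = np.array([5., 5., 1.5]), 2.0
      best, best_cost = p, cost(p)
      for it in range(200):                        # simulated annealing stabilizes the search
          cand = refine(p + rng.normal(scale=T, size=3))
          dc = cost(cand) - cost(p)
          if dc < 0 or rng.random() < np.exp(-dc / max(T, 1e-6)):
              p = cand
          if cost(p) < best_cost:
              best, best_cost = p, cost(p)
          T *= 0.98
      print("estimated tag position:", np.round(best, 2))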

  15. 3D shape shearography with integrated structured light projection for strain inspection of curved objects

    NASA Astrophysics Data System (ADS)

    Anisimov, Andrei G.; Groves, Roger M.

    2015-05-01

    Shearography (speckle pattern shearing interferometry) is a non-destructive testing technique that provides full-field surface strain characterization. Although real-life objects, especially in aerospace, transport or cultural heritage, are not flat (e.g. aircraft leading edges or sculptures), their inspection with shearography is of interest for both hidden defect detection and material characterization. Accurate strain measurement of a highly curved or free-form surface needs to be performed by combining inline object shape measurement and processing of shearography data in 3D. Previous research has not provided a general solution. This research is devoted to the practical questions of developing a 3D shape shearography system for surface strain characterization of curved objects. The complete procedure of calibration and data processing of a 3D shape shearography system with an integrated structured light projector is presented. This includes an estimation of the actual shear distance and a sensitivity matrix correction within the system field of view. For the experimental part, a 3D shape shearography system prototype was developed. It employs three spatially distributed shearing cameras, with Michelson interferometers acting as the shearing devices, one illumination laser source and a structured light projector. The developed system's performance was evaluated with a previously reported cylinder specimen (length 400 mm, external diameter 190 mm) loaded by internal pressure. Further steps for the development of the 3D shape shearography prototype and technique are also proposed.

  16. The volume hologram printer to record the wavefront of a 3D object

    NASA Astrophysics Data System (ADS)

    Miyamoto, Osamu; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2012-03-01

    A computer-generated hologram (CGH) is well known to reconstruct a 3D image faithfully, and several CGH printers have been reported. Since those printers can only output a transmission hologram, a large-scale optical system is necessary to reconstruct the full-parallax and full-color image. For simple reconstruction, it is only necessary to use a volume reflection hologram. However, making a volume hologram requires transferring a CGH by means of an optical system. On the other hand, there are printers which output volume-type holographic stereograms reconstructing the full-parallax and full-color image. However, reconstructed images with large depth get blurred due to the insufficient sampling of rays from the 3D object. In this study, the authors propose a volume hologram printer that records the wavefront of a 3D object. By transferring the CGH displayed on an LCoS device, the proposed printer can output a volume hologram. In addition, a large volume hologram can be printed by transferring, in turn, plural CGHs, each of which records part of the 3D object. As a result, the printed volume hologram was able to reconstruct a monochrome 3D image under white light, realizing a full-parallax image.
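
    The wavefront-recording idea behind a CGH can be sketched by summing spherical waves from the object points and interfering them with a plane reference wave on the hologram plane. The Python sketch below is a toy Fresnel-style computation; the wavelength, pixel pitch, geometry and normalization are illustrative values only, not the parameters of the proposed printer.

      import numpy as np

      wavelength = 532e-9                  # metres (illustrative)
      pitch = 8e-6                         # hologram pixel pitch (illustrative)
      N = 512
      k = 2 * np.pi / wavelength
      x = (np.arange(N) - N/2) * pitch
      X, Y = np.meshgrid(x, x)

      # a few 3D object points (x, y, z-distance from the hologram plane), unit amplitude
      points = [(0.0, 0.0, 0.05), (0.5e-3, 0.3e-3, 0.06), (-0.4e-3, -0.2e-3, 0.055)]

      field = np.zeros((N, N), complex)
      for px, py, pz in points:
          r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
          field += np.exp(1j * k * r) / r  # spherical wave from each object point

      field /= np.abs(field).max()         # normalize so fringes against the reference show
      reference = 1.0                      # on-axis plane reference wave
      hologram = np.abs(field + reference)**2   # recorded interference (fringe) pattern
      print(hologram.shape, hologram.min(), hologram.max())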

  17. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  18. The 3D representation of the new transformation from the terrestrial to the celestial system.

    NASA Astrophysics Data System (ADS)

    Dehant, V.; de Viron, O.; Capitaine, N.

    2006-08-01

    To study the sky from the Earth or to use navigation satellites, we need two reference systems: a celestial reference system, as fixed as possible with respect to the inertial frame, and a terrestrial reference system, rotating with the Earth. Additionally, we need a way to go from one reference system to the other. This transformation involves the Earth rotation rate, polar motion, and precession-nutation. It is carried out using an intermediate system, in which the Earth rotation itself is corrected for. Previously, an intermediate system related to the equinox was used; the new paradigm involves a point, denoted the Celestial Intermediate Origin (CIO), which, owing to its kinematical property of being a "non-rotating origin", allows a better description of the length-of-day of the Earth. Whether or not the CIO is used only affects this intermediate frame. The new transformation involving the CIO is also much simpler. Moreover, the use of the CIO allows an elegant separation between polar motion, precession-nutation and rotation rate variations. In this presentation we show 3D representations that explain all this.

  19. 2D noise propagation in 3D object position determination from a single-perspective projection

    NASA Astrophysics Data System (ADS)

    Habets, Damiaan F.; Pollmann, Steven; Holdsworth, David W.

    2002-05-01

    Image guidance during endovascular intervention is predominantly provided by two-dimensional (2D) digital radiographic systems used for vessel visualization and localization of clips and coils. This paper describes the propagation of 2D noise into the determination of three-dimensional (3D) object position from a single perspective view. In our system, a view is obtained by a digital fluoroscopic x-ray system, corrected for XRII distortions (+/- 0.035 mm) and mechanical C-arm shifts (+/- 0.080 mm). The tracked object contains high-contrast markers with known relative spacing, allowing identification and centroid calculation. A least-squares projection-Procrustes analysis of the 2D perspective projection is used to determine the 3D position of the object. The effect of uncertainty in 2D marker position on the precision of 3D object localization was investigated using simulations and phantoms, and a nearly linear relationship was found; however, the slope of this relationship is not unity. The slope found indicates a significant amplification of error due to the least-squares solution, which is not equally distributed among the three major axes. In order to obtain a 3D localization error of less than +/- 1 mm, the 2D localization precision must be better than +/- 0.2 mm for each marker.
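
    The noise-propagation question can be explored numerically: project known 3D markers through a pinhole model, add Gaussian noise to the 2D centroids, re-estimate the object position by least squares, and record the 3D error as a function of 2D noise. The Python sketch below is a generic Monte Carlo simulation with assumed camera and marker parameters, not the authors' projection-Procrustes solver.

      import numpy as np
      from scipy.optimize import least_squares

      f = 1000.0                                     # focal length in pixels (assumed)
      markers = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]], float)  # mm

      def project(t):
          """Pinhole projection of the rigidly translated marker set (rotation omitted)."""
          p = markers + t
          return f * p[:, :2] / p[:, 2:3]

      true_t = np.array([5.0, -3.0, 800.0])          # object about 0.8 m from the camera
      rng = np.random.default_rng(0)

      for sigma_2d in (0.05, 0.1, 0.2, 0.4):         # 2D localization noise in pixels
          errs = []
          for _ in range(200):
              obs = project(true_t) + sigma_2d * rng.standard_normal((4, 2))
              fit = least_squares(lambda t: (project(t) - obs).ravel(), x0=[0, 0, 700.0])
              errs.append(np.linalg.norm(fit.x - true_t))
          print(f"2D noise {sigma_2d:.2f} px -> mean 3D error {np.mean(errs):.3f} mm")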

  20. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  1. Printing of metallic 3D micro-objects by laser induced forward transfer.

    PubMed

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop-on-demand transfer of molten, femtoliter metal droplets with high jetting directionality. Such small-volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high-aspect-ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed. PMID:26832524

  2. Close-Range Photogrammetric Tools for Small 3d Archeological Objects

    NASA Astrophysics Data System (ADS)

    Samaan, M.; Héno, R.; Pierrot-Deseilligny, M.

    2013-07-01

    This article focuses on the first experiments carried out for our PhD thesis, which is meant to make the new image-based methods available to archeologists. As a matter of fact, efforts need to be made to find cheap, efficient and user-friendly procedures for image acquisition, data processing and quality control. Among the numerous tasks that archeologists have to face daily is the 3D recording of very small objects. The Apero/MicMac tools were used for the georeferencing and the dense correlation procedures. Relatively standard workflows lead to depth maps, which can be represented either as 3D point clouds or shaded relief images.

  3. Identification and Detection of Simple 3D Objects with Severely Blurred Vision

    PubMed Central

    Kallie, Christopher S.; Legge, Gordon E.; Yu, Deyue

    2012-01-01

    Purpose. Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. Methods. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10–24 feet, or 3.05–7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2–6 feet, or 0.61–1.83 m), and color (gray and white). Results. Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). Conclusions. When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed. PMID:23111613
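
    The visual-angle driver behind these results can be made concrete with the usual small-angle formula, θ = 2·arctan(h / 2d). A short Python check using the object heights and viewing distances quoted in the abstract (a 1.83 m object at 7.32 m subtends roughly 14°):

      import math

      def visual_angle_deg(height_m, distance_m):
          """Visual angle subtended by an object of a given height at a given distance."""
          return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

      for h in (0.61, 1.83):             # 2 ft and 6 ft object heights from the study
          for d in (3.05, 7.32):         # 10 ft and 24 ft viewing distances
              print(f"h={h} m, d={d} m -> {visual_angle_deg(h, d):.1f} deg")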

  4. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
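    As a rough illustration of the local entropy-based texture extraction mentioned above, the sketch below applies an entropy filter slice by slice to a 3-D image stack using scikit-image; the filter radius and the slice-wise treatment are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of local entropy-based texture extraction on a 3-D image
# stack (e.g. DIC slices), using scikit-image. Illustrative only; the
# original paper's exact filter parameters are not reproduced here.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def texture_volume(stack, radius=5):
    """Apply a local-entropy texture filter to each z-slice of a 3-D stack.

    stack  : (nz, ny, nx) float array with values in [0, 1]
    radius : radius of the circular neighborhood used by the entropy filter
    """
    footprint = disk(radius)
    out = np.empty(stack.shape, dtype=float)
    for z in range(stack.shape[0]):
        # rank filters expect integer images; rescale each slice to uint8
        slice_u8 = img_as_ubyte(stack[z])
        out[z] = entropy(slice_u8, footprint)
    return out

# Example: random "volume" standing in for a DIC image stack
demo = np.random.rand(4, 128, 128)
tex = texture_volume(demo, radius=5)
print(tex.shape, tex.max())
```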

  5. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/ visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python etc) and efficient use of legacy code.

  6. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a discussion of a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  7. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

    Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment and the difficulty of robot navigation. However, painting automation is necessary because it can provide a consistent painting film thickness, and autonomous mobile robots are strongly required for flexible painting work. The main problem in autonomous mobile robot navigation is that there are many obstacles which are not represented in the CAD data. To overcome this problem, obstacle detection and recognition are necessary so that the robot can avoid obstacles and carry out the painting work effectively. Many object recognition algorithms have been studied to date, especially 2D object recognition methods using intensity images. In our case, however, there is no ambient illumination, so these methods cannot be used. Instead, 3D range data must be used, but its drawbacks are high computational cost and long recognition times due to the large database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and a NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, and PCA and the NN are then applied to this transformed intensity information, reducing the processing time and making the data easier to handle, which were disadvantages of previous work on 3D object recognition. A set of experimental results is shown to verify the effectiveness of the proposed algorithm.
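    A minimal sketch of the kind of PCA-plus-neural-network pipeline described above, assuming scikit-learn and a placeholder range-to-intensity transform; the paper's actual transform, descriptor sizes and network architecture are not given in the abstract, so all parameters below are illustrative.

```python
# Illustrative sketch of a PCA + neural-network recognition pipeline:
# range patches are converted to "intensity", flattened, projected onto a
# few principal components, and classified with a neural network. The
# range-to-intensity conversion and network size are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def range_to_intensity(range_patch):
    # Placeholder transform: normalize depth values into [0, 1] "intensity".
    r = np.asarray(range_patch, dtype=float)
    return (r - r.min()) / (r.max() - r.min() + 1e-9)

# Synthetic stand-in data: 200 range patches of 32x32 pixels, 4 object classes
rng = np.random.default_rng(0)
X = np.stack([range_to_intensity(rng.normal(size=(32, 32))).ravel()
              for _ in range(200)])
y = rng.integers(0, 4, size=200)

model = make_pipeline(PCA(n_components=20),
                      MLPClassifier(hidden_layer_sizes=(50,), max_iter=500))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```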

  8. Three 3D graphical representations of DNA primary sequences based on the classifications of DNA bases and their applications.

    PubMed

    Xie, Guosen; Mo, Zhongxi

    2011-01-21

    In this article, we introduce three 3D graphical representations of DNA primary sequences, which we call RY-curve, MK-curve and SW-curve, based on three classifications of the DNA bases. The advantages of our representations are that (i) these 3D curves are strictly non-degenerate and there is no loss of information when transferring a DNA sequence to its mathematical representation and (ii) the coordinates of every node on these 3D curves have clear biological implication. Two applications of these 3D curves are presented: (a) a simple formula is derived to calculate the content of the four bases (A, G, C and T) from the coordinates of nodes on the curves; and (b) a 12-component characteristic vector is constructed to compare similarity among DNA sequences from different species based on the geometrical centers of the 3D curves. As examples, we examine similarity among the coding sequences of the first exon of beta-globin gene from eleven species and validate similarity of cDNA sequences of beta-globin gene from eight species.
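    The following sketch illustrates the general idea of building a 3D walk from the three standard base classifications (purine/pyrimidine, amino/keto, weak/strong); the exact coordinate rules of the RY-, MK- and SW-curves are defined in the paper, and the +1/-1 scheme below is only a stand-in.

```python
# Illustrative 3-D walk for a DNA sequence built from the three standard
# base classifications. The exact coordinate assignments of the RY-, MK-
# and SW-curves are defined in the paper; this +1/-1 scheme is a stand-in.
def dna_walk(seq):
    ry = {'A': 1, 'G': 1, 'C': -1, 'T': -1}   # purine vs. pyrimidine
    mk = {'A': 1, 'C': 1, 'G': -1, 'T': -1}   # amino vs. keto
    sw = {'A': 1, 'T': 1, 'C': -1, 'G': -1}   # weak vs. strong pairing
    x = y = z = 0
    points = [(0, 0, 0)]
    for base in seq.upper():
        x += ry[base]
        y += mk[base]
        z += sw[base]
        points.append((x, y, z))
    return points

curve = dna_walk("ATGGTGCACCTGACTCCTG")   # example coding-sequence fragment
print(curve[-1])                          # end point of the walk
# The geometric center of the curve can serve as part of a per-sequence
# descriptor for similarity comparison, as described above.
center = tuple(sum(p[i] for p in curve) / len(curve) for i in range(3))
print(center)
```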

  9. Anticipatory Spatial Representation of 3D Regions Explored by Sighted Observers and a Deaf-and-Blind-Observer

    ERIC Educational Resources Information Center

    Intraub, Helene

    2004-01-01

    Viewers who study photographs of scenes tend to remember having seen beyond the boundaries of the view ["boundary extension"; J. Exp. Psychol. Learn. Mem. Cogn. 15 (1989) 179]. Is this a fundamental aspect of scene representation? Forty undergraduates explored bounded regions of six common (3D) scenes, visually or haptically (while blindfolded)…

  10. Numerical Analysis of Electromagnetic Scattering from 3-D Dielectric Objects Using the Yasuura Method

    NASA Astrophysics Data System (ADS)

    Koba, Koichi; Ikuno, Hiroyoshi; Kawano, Mitsunori

    Calculating 3-D electromagnetic scattering from dielectric objects requires solving a large simultaneous linear system. We present a rapid algorithm based on the Yasuura method in which the convergence rate of the solution is accelerated by using an array of multipoles in addition to a conventional multipole. As a result, we can obtain the radar cross sections of dielectric objects in the optical wave region over a relatively wide frequency range, as well as a TDG pulse response. Furthermore, we analyze the scattering data for dielectric objects by using pulse responses cut by an appropriate window function in the time domain, and clarify the scattering processes on dielectric objects.

  11. A Low-Cost and Portable System for 3D Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    The optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. This system includes a rotating table designed and developed using a stepper motor and a very light rotation plate. The system also has eight laser light sources with very dense and strong beams which project a relatively appropriate pattern onto texture-less objects. In this system, images are taken semi-automatically by a camera in accordance with the steps of the stepper motor. The images can be used in Structure from Motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and using a GOM laser scanner. These objects were then placed on the proposed turntable. Several convergent images were taken of each object while the laser light sources projected the pattern onto the objects. Afterwards, the images were imported into VisualSFM, a fully automatic software package, for generating an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model of texture-less objects.

  12. Registration of untypical 3D objects in Polish cadastre - do we need 3D cadastre? / Rejestracja nietypowych obiektów 3D w polskim katastrze - czy istnieje potrzeba wdrożenia katastru 3D?

    NASA Astrophysics Data System (ADS)

    Marcin, Karabin

    2012-11-01

    The Polish cadastral system consists of two registers: the cadastre and the land register. The cadastre registers data on cadastral objects (land parcels, buildings and premises), namely their location (in a two-dimensional coordinate system) and attributes, as well as data about the owners. The land register contains data concerning ownership and other rights to the property. Registration of a land parcel without spatial objects located on its surface is not problematic. Registration of buildings and premises in typical cases is not a problem either. The situation becomes more complicated in cases of multiple use of the space above or below the parcel and with more complex building constructions. The paper presents rules concerning the registration of various untypical 3D objects located within the city of Warsaw. An analysis of the data concerning those objects registered in the cadastre and land register is presented in the paper; this continues the author's earlier detailed research. The aim of this paper is to answer the question of whether we really need a 3D cadastre in Poland.

  13. Improving object detection in 2D images using a 3D world model

    NASA Astrophysics Data System (ADS)

    Viggh, Herbert E. M.; Cho, Peter L.; Armstrong-Crews, Nicholas; Nam, Myra; Shah, Danelle C.; Brown, Geoffrey E.

    2014-05-01

    A mobile robot operating in a netcentric environment can utilize offboard resources on the network to improve its local perception. One such offboard resource is a world model built and maintained by other sensor systems. In this paper we present results from research into improving the performance of Deformable Parts Model object detection algorithms by using an offboard 3D world model. Experiments were run for detecting both people and cars in 2D photographs taken in an urban environment. After generating candidate object detections, a 3D world model built from airborne Light Detection and Ranging (LIDAR) and aerial photographs was used to filter out false alarms using several types of geometric reasoning. Comparison of the baseline detection performance to the performance after false alarm filtering showed a significant decrease in false alarms for a given probability of detection.
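    The abstract does not spell out the geometric reasoning used for false-alarm filtering; the hypothetical sketch below shows one plausible check of this kind, rejecting person detections whose implied physical height (from a calibrated camera and a world-model range query) is implausible. The world-model interface, focal length and height limits are all assumptions, not the paper's method.

```python
# Hypothetical sketch of one kind of geometric false-alarm filtering:
# given a pinhole camera focal length and a range lookup from a 3-D world
# model (e.g. built from LIDAR), reject 2-D detections whose implied
# physical height is implausible for the target class. All parameters and
# the world-model interface are assumptions for illustration only.
def implied_height_m(bbox_px, depth_m, focal_px):
    """Physical height of a detection from its pixel height and range."""
    x1, y1, x2, y2 = bbox_px
    return (y2 - y1) * depth_m / focal_px

def filter_detections(detections, world_model, focal_px, limits=(1.2, 2.2)):
    """Keep person detections whose implied height lies in a plausible range.

    detections  : list of dicts with 'bbox' (x1, y1, x2, y2) and 'ground_xy'
    world_model : object with a range_to(x, y) method returning distance in m
    """
    kept = []
    for det in detections:
        depth = world_model.range_to(*det['ground_xy'])
        h = implied_height_m(det['bbox'], depth, focal_px)
        if limits[0] <= h <= limits[1]:
            kept.append(det)
    return kept
```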

  14. 220 GHz wideband 3D imaging radar for concealed object detection technology development and phenomenology studies

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Bryllert, Tomas

    2016-05-01

    We present a 220 GHz 3D imaging `Pathfinder' radar developed within the EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) which has been built to address two objectives: (i) to de-risk the radar hardware development and (ii) to enable the collection of phenomenology data with ~1 cm3 volumetric resolution. The radar combines a DDS-based chirp generator and self-mixing multiplier technology to achieve a 30 GHz bandwidth chirp with such high linearity that the raw point response is close to ideal and only requires minor nonlinearity compensation. The single transceiver is focused with a 30 cm lens mounted on a gimbal to acquire 3D volumetric images of static test targets and materials.

  15. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  16. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  17. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
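    The sketch below shows a generic random-hyperplane LSH index in NumPy, standing in for the LSH stage described above; the paper's actual surface descriptors, hash family and model database are not reproduced.

```python
# Generic random-hyperplane LSH sketch for indexing descriptor vectors.
# Schematic stand-in for the paper's LSH stage, not its actual descriptors.
import numpy as np

class HyperplaneLSH:
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        bits = (self.planes @ v) > 0     # sign pattern = hash key
        return bits.tobytes()

    def insert(self, v, payload):
        self.buckets.setdefault(self._key(v), []).append(payload)

    def query(self, v):
        return self.buckets.get(self._key(v), [])

# Index 10,000 random 64-D "descriptors" labelled by model id, then query.
rng = np.random.default_rng(1)
descs = rng.normal(size=(10000, 64))
index = HyperplaneLSH(dim=64, n_bits=16)
for i, d in enumerate(descs):
    index.insert(d, ("model", i % 365))
candidates = index.query(descs[0] + 0.01 * rng.normal(size=64))
print(len(candidates), "candidate matches in the colliding bucket")
```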

  18. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values. PMID:24371468
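    As a toy illustration of the algebraic reconstruction technique (ART) mentioned above, the following NumPy sketch runs Kaczmarz-style sweeps on a small dense system; real DBT projection geometry and the C++ simulator are far more involved.

```python
# Toy illustration of ART/Kaczmarz: cycle over projection equations A x = b
# and project the current estimate onto each hyperplane in turn. The actual
# tomosynthesis system matrix is sparse and geometry-dependent.
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += relax * (b[i] - ai @ x) / denom * ai
    return x

# Small synthetic "projection" system standing in for a tomosynthesis setup
rng = np.random.default_rng(0)
x_true = rng.random(20)
A = rng.random((60, 20))
b = A @ x_true
x_rec = art(A, b, n_sweeps=200, relax=0.5)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```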

  19. Blind robust watermarking schemes for copyright protection of 3D mesh objects.

    PubMed

    Zafeiriou, Stefanos; Tefas, Anastasios; Pitas, Ioannis

    2005-01-01

    In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, theta, phi) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles theta is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the theta domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed method to deal with the aforementioned attacks.
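    The sketch below schematically reproduces the embedding steps described in the abstract (center-of-mass and principal-axis alignment, conversion to spherical coordinates, perturbation of the radial coordinate r only); watermark sequence design, key handling and detection are omitted, and the alignment details (sign and handedness of the axes) are simplified.

```python
# Schematic sketch of the embedding steps: align the mesh by its center of
# mass and principal axis, convert vertices to spherical coordinates, and
# perturb only the radial coordinate r of a selected subset of vertices.
import numpy as np

def embed_radial_watermark(vertices, watermark, strength=0.002, seed=0):
    """vertices: (N, 3) array; watermark: 1-D array of +/-1 values."""
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)                     # center of mass to the origin
    _, _, vt = np.linalg.svd(v, full_matrices=False)
    coords = v @ vt.T                          # columns by decreasing variance
    v = coords[:, ::-1]                        # principal direction on z-axis
    # Spherical coordinates (r, theta, phi)
    r = np.linalg.norm(v, axis=1)
    theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(v[:, 1], v[:, 0])
    # Perturb r only, for a pseudo-randomly chosen subset of vertices
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(v), size=len(watermark), replace=False)
    r[idx] *= 1.0 + strength * np.asarray(watermark)
    # Back to Cartesian coordinates in the aligned frame
    return np.column_stack((r * np.sin(theta) * np.cos(phi),
                            r * np.sin(theta) * np.sin(phi),
                            r * np.cos(theta)))

verts = np.random.default_rng(1).normal(size=(500, 3))
wm = np.sign(np.random.default_rng(2).normal(size=64))
marked = embed_radial_watermark(verts, wm)
print(marked.shape)
```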

  20. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that ground segmentation requires less time than data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
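    A minimal sketch of the height-histogram stage of the ground segmentation described above, assuming a plain NumPy point cloud; the Gibbs-Markov random field refinement and the boundary estimation are not reproduced, and the bin size and tolerance are illustrative.

```python
# Minimal sketch of histogram-based ground segmentation: estimate the
# dominant ground height from a histogram of point heights and label points
# near it as ground. The MRF refinement step is not reproduced here.
import numpy as np

def segment_ground(points, bin_size=0.1, tolerance=0.3):
    """points: (N, 3) array of x, y, z; returns a boolean ground mask."""
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, edges = np.histogram(z, bins=bins)
    ground_z = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    return np.abs(z - ground_z) < tolerance

# Synthetic scene: flat ground plus a few "objects" sticking up
rng = np.random.default_rng(0)
ground = np.column_stack((rng.uniform(0, 10, 5000),
                          rng.uniform(0, 10, 5000),
                          rng.normal(0.0, 0.05, 5000)))
objects = np.column_stack((rng.uniform(0, 10, 500),
                           rng.uniform(0, 10, 500),
                           rng.uniform(0.5, 2.5, 500)))
cloud = np.vstack((ground, objects))
mask = segment_ground(cloud)
print("ground points:", mask.sum(), "of", len(cloud))
```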

  1. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553
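    The sketch below shows one way to obtain the disparity channel behind the "2.5D" features mentioned above, using OpenCV's semi-global block matching on a rectified stereo pair; the parameter values are illustrative and the paper's DPM extension itself is not reproduced.

```python
# Sketch of computing a disparity map from a rectified stereo pair with
# OpenCV semi-global block matching, to use as a depth cue alongside the
# color/appearance channels. Parameter values are illustrative.
import cv2
import numpy as np

def disparity_map(left_gray, right_gray):
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,   # multiple of 16
                                    blockSize=5)
    # compute() returns fixed-point disparities scaled by 16
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

# Hypothetical usage on a rectified stereo pair:
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# disp = disparity_map(left, right)
# features = np.dstack((left, disp))   # appearance + depth channels
```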

  2. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  3. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  4. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  5. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  6. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  7. Applying Mean-Shift - Clustering for 3D object detection in remote sensing data

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Diederich, Malte; Troemel, Silke

    2013-04-01

    The timely warning and forecasting of high-impact weather events is crucial for life, safety and economy. Therefore, the development and improvement of methods for detection and nowcasting / short-term forecasting of these events is an ongoing research question. A new 3D object detection and tracking algorithm is presented. Within the project "object-based analysis and seamless prediction (OASE)" we address a better understanding and forecasting of convective events based on the synergetic use of remotely sensed data and new methods for detection, nowcasting, validation and assimilation. In order to gain advanced insight into the lifecycle of convective cells, we perform object detection on a new high-resolution 3D radar- and satellite-based composite and plan to track the detected objects over time, providing us with a model of the lifecycle. The insights into the lifecycle will be used to improve prediction of convective events on the nowcasting time scale, and to provide a new type of data to be assimilated into numerical weather models, thus seamlessly bridging the gap between nowcasting and NWP. The object identification (or clustering) is performed using a technique borrowed from computer vision, called mean-shift clustering. Mean-shift clustering works without many of the parameterizations or rigid threshold schemes employed by many existing schemes (e.g. KONRAD, TITAN, Trace-3D), which limit the tracking to fully matured convective cells of significant size and/or strength. Mean-shift performs without such limiting definitions, providing a wider scope for studying larger classes of phenomena and providing a vehicle for research into the object definition itself. Since the mean-shift clustering technique can be applied to many types of remote-sensing and model data for object detection, it is of general interest to the remote sensing and modeling community. The focus of the presentation is the introduction of this technique and the results of its
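    As a generic illustration of the mean-shift clustering step, the sketch below applies scikit-learn's MeanShift to synthetic 3-D points standing in for reflectivity-weighted grid cells of the composite; the bandwidth and the construction of the sample points are application-specific assumptions.

```python
# Generic mean-shift clustering sketch (scikit-learn) applied to 3-D points,
# standing in for object identification on the radar/satellite composite.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# Two synthetic "cells": dense blobs of grid points (x, y, altitude)
cell_a = rng.normal(loc=(10.0, 20.0, 3.0), scale=1.0, size=(300, 3))
cell_b = rng.normal(loc=(25.0, 18.0, 6.0), scale=1.5, size=(300, 3))
points = np.vstack((cell_a, cell_b))

ms = MeanShift(bandwidth=3.0)        # bandwidth chosen for this toy scale
labels = ms.fit_predict(points)
print("detected objects:", len(np.unique(labels)))
print("cluster centers:\n", ms.cluster_centers_)
```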

  8. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3D Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. Documenting and digitizing these unique heritages throughout their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study in digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp can, at this stage, only be used to model basic visual representations and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  9. Determining canonical views of 3D object using minimum description length criterion and compressive sensing method

    NASA Astrophysics Data System (ADS)

    Chen, Ping-Feng; Krim, Hamid

    2008-02-01

    In this paper, we propose two methods to determine the canonical views of 3D objects: the minimum description length (MDL) criterion and a compressive sensing method. The MDL criterion searches for the description length that achieves a balance between model accuracy and parsimony. It takes the form of the sum of a likelihood term and a penalizing term, where the likelihood favors model accuracy, such that more views assist the description of an object, while the second term penalizes lengthy descriptions to prevent overfitting of the model. In order to devise the likelihood term, we propose a model that represents a 3D object as the weighted sum of multiple range images, which is also used in the second method to determine the canonical views. In the compressive sensing method, an intelligent way of parsimoniously sampling an object is presented. We draw directly on the work of Donoho [1] and Candès [2] and adapt it to our model. Each range image is viewed as a projection, or a sample, of a 3D model, and by using compressive sensing theory we are able to reconstruct the object with overwhelming probability by sparsely sensing the object in a random manner. Compressive sensing differs from traditional compression methods in that the former compresses data at the sampling stage, whereas the latter collects a large number of samples and applies compression afterwards. The compressive sensing scheme is particularly useful when the number of sensors is limited or when the sampling machinery costs significant resources or time.

  10. Probabilistic 3D object recognition and pose estimation using multiple interpretations generation.

    PubMed

    Lu, Zhaojin; Lee, Sukhan

    2011-12-01

    This paper presents a probabilistic object recognition and pose estimation method using multiple interpretation generation in cluttered indoor environments. How to handle pose ambiguity and uncertainty is the main challenge in most recognition systems. In order to solve this problem, we approach it in a probabilistic manner. First, given a three-dimensional (3D) polyhedral object model, the parallel and perpendicular line pairs, which are detected from stereo images and 3D point clouds, generate pose hypotheses as multiple interpretations, with ambiguity from partial occlusion and fragmentation of 3D lines especially taken into account. Different from the previous methods, each pose interpretation is represented as a region instead of a point in pose space reflecting the measurement uncertainty. Then, for each pose interpretation, more features around the estimated pose are further utilized as additional evidence for computing the probability using the Bayesian principle in terms of likelihood and unlikelihood. Finally, fusion strategy is applied to the top ranked interpretations with high probabilities, which are further verified and refined to give a more accurate pose estimation in real time. The experimental results show the performance and potential of the proposed approach in real cluttered domestic environments.

  11. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability of the algorithms for various motion analysis tasks in technical and biomechanical applications.

  12. Study of improved ray tracing parallel algorithm for CGH of 3D objects on GPU

    NASA Astrophysics Data System (ADS)

    Cong, Bin; Jiang, Xiaoyu; Yao, Jun; Zhao, Kai

    2014-11-01

    An improved parallel algorithm for computing holograms of three-dimensional objects is presented. Based on the physical characteristics and mathematical properties of the original ray tracing algorithm for computer generated holograms (CGH), and using transform approximation and numerical analysis methods, we extract the parts of the ray tracing algorithm that are suitable for parallelization and implement them on a graphics processing unit (GPU). Meanwhile, through proper design of the parallel numerical procedure, the two-dimensional slices of the three-dimensional object are processed in parallel with CUDA. Based on the experiments, an effective method of dealing with the occlusion problem in ray tracing is proposed, as well as a way of generating holograms of 3D objects using their additive property. Our results indicate that the improved algorithm can effectively shorten the computing time. Depending on the numbers of object points and hologram pixels, the speed increases by a factor of 20 to 70 compared with the original ray tracing algorithm.
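    The following serial NumPy sketch shows the point-source hologram summation that such algorithms parallelize on the GPU: each object point contributes a spherical wave to every hologram pixel. Occlusion handling and the CUDA slicing scheme from the paper are omitted, and the wavelength and pixel pitch below are illustrative.

```python
# Serial NumPy sketch of point-source CGH: sum a spherical wave from each
# object point over all hologram pixels. Occlusion handling is omitted.
import numpy as np

def point_source_cgh(points, amplitudes, n=256, pitch=8e-6, wavelength=532e-9):
    """points: (M, 3) object points in metres; returns an (n, n) complex field."""
    k = 2.0 * np.pi / wavelength
    xs = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(xs, xs)
    field = np.zeros((n, n), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r
    return field

pts = np.array([[0.0, 0.0, 0.10], [2e-4, -1e-4, 0.12]])
amps = np.array([1.0, 0.8])
hologram = point_source_cgh(pts, amps)
print(hologram.shape, np.abs(hologram).max())
```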

  13. Non-destructive 3D shape measurement of transparent and black objects with thermal fringes

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2016-05-01

    Fringe projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the emitted heat radiation from surfaces which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need for any additional paint coating. We will demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.

  14. The 3-D alignment of objects in dynamic PET scans using filtered sinusoidal trajectories of sinogram

    NASA Astrophysics Data System (ADS)

    Kostopoulos, Aristotelis E.; Happonen, Antti P.; Ruotsalainen, Ulla

    2006-12-01

    In this study, our goal is to employ a novel 3-D alignment method for dynamic positron emission tomography (PET) scans. Because the acquired data (i.e. sinograms) often contain considerable noise, filtering the data prior to alignment presumably improves the final results. In this study, we utilized a novel 3-D stackgram domain approach. In the stackgram domain, the signals along the sinusoidal trajectories of the sinogram can be processed separately. In this work, we performed angular stackgram domain filtering by employing well-known 1-D filters: the Gaussian low-pass filter and the median filter. In addition, we employed two wavelet de-noising techniques. After filtering we performed alignment of objects in the stackgram domain. The local alignment technique we used is based on similarity comparisons between locus vectors (i.e. the signals along the sinusoidal trajectories of the sinogram) in a 3-D neighborhood of sequences of stackgrams. Aligned stackgrams can be transformed back to sinograms (Method 1), or alternatively directly to filtered back-projected images (Method 2). In order to evaluate the alignment process, simulated data with different kinds of additive noise were used. The results indicated that filtering prior to alignment can be important for accuracy.

  15. Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4

    NASA Astrophysics Data System (ADS)

    Gasore, J.; Prinn, R. G.

    2012-12-01

    The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al 1997, Cohen&Prinn 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost when the dimension of uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations; urban versus rural for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, which is a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in Mozart-4, a lognormal number distribution is assumed with a priori fixed scale and location parameters, to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of Black Carbon, Organic Carbon and Sulfate. We have carried out a Monte-Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a

  16. Efficient Use of Video for 3D Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
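    One simple ingredient of such a frame-selection scheme is a blur score; the sketch below keeps only frames whose variance-of-Laplacian sharpness exceeds a threshold, at a minimum frame spacing. This is an illustrative stand-in assuming OpenCV; the paper's full criterion, which also accounts for object coverage, is not reproduced, and the threshold and spacing values are arbitrary.

```python
# Sketch of blur-aware frame selection from a video: score each frame with
# the variance of its Laplacian and keep sharp frames at a minimum spacing.
import cv2

def select_sharp_frames(video_path, blur_threshold=100.0, min_gap=15):
    cap = cv2.VideoCapture(video_path)
    kept, idx, last_kept = [], 0, -min_gap
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness > blur_threshold and idx - last_kept >= min_gap:
            kept.append(idx)
            last_kept = idx
        idx += 1
    cap.release()
    return kept

# Hypothetical usage:
# frames = select_sharp_frames("monument.mp4")
```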

  17. Measuring the 3D shape of high temperature objects using blue sinusoidal structured light

    NASA Astrophysics Data System (ADS)

    Zhao, Xianling; Liu, Jiansheng; Zhang, Huayu; Wu, Yingchun

    2015-12-01

    The visible light radiated by some high-temperature objects (below 1200 °C) lies almost entirely in the red and infrared bands. It will interfere with structured light projected on a forging surface if phase measurement profilometry (PMP) is used to measure the shapes of such objects. In order to obtain a clear deformed pattern image, a 3D measurement method based on blue sinusoidal structured light is proposed in the present work. Moreover, a method for filtering the deformed pattern images is presented for correction of the unwrapped phase. Blue sinusoidal phase-shifting fringe pattern images are projected onto the surface by a digital light processing (DLP) projector, and the deformed patterns are then captured by a 3-CCD camera. The deformed pattern images are separated into R, G and B color components by the software. The B color images, filtered by a low-pass filter, are used to calculate the fringe order. Consequently, the 3D shape of a high-temperature object is obtained from the unwrapped phase and the calibration parameter matrices of the DLP projector and 3-CCD camera. The experimental results show that the unwrapped phase is completely corrected by the filtering method, which removes the high-frequency noise from the first harmonic of the B color images. The measurement system can complete a measurement in a few seconds with a relative error of less than 1:1000.
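    The core phase computation behind PMP is the standard phase-shifting relation; the sketch below shows the four-step version, in which the wrapped phase follows from four fringe images shifted by 90 degrees. The blue-channel extraction, low-pass filtering and projector/camera calibration described above are not shown.

```python
# Standard four-step phase-shifting relation: from four fringe images with
# phase shifts of 0, 90, 180 and 270 degrees, the wrapped phase is
# atan2(I4 - I2, I1 - I3).
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    return np.arctan2(i4.astype(float) - i2.astype(float),
                      i1.astype(float) - i3.astype(float))

# Synthetic check: recover a known phase ramp
x = np.linspace(0, 4 * np.pi, 512)
phi_true = np.tile(x, (64, 1))
frames = [128 + 100 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*frames)
print(np.allclose(np.cos(phi), np.cos(phi_true), atol=1e-6))
```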

  18. Orienting Attention to Sound Object Representations Attenuates Change Deafness

    ERIC Educational Resources Information Center

    Backer, Kristina C.; Alain, Claude

    2012-01-01

    According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet…

  19. FMRI Reveals a Dissociation between Grasping and Perceiving the Size of Real 3D Objects

    PubMed Central

    Cavina-Pratesi, Cristiana; Goodale, Melvyn A.; Culham, Jody C.

    2007-01-01

    Background Almost 15 years after its formulation, evidence for the neuro-functional dissociation between a dorsal action stream and a ventral perception stream in the human cerebral cortex is still based largely on neuropsychological case studies. To date, there is no unequivocal evidence for separate visual computations of object features for performance of goal-directed actions versus perceptual tasks in the neurologically intact human brain. We used functional magnetic resonance imaging to test explicitly whether or not brain areas mediating size computation for grasping are distinct from those mediating size computation for perception. Methodology/Principal Findings Subjects were presented with the same real graspable 3D objects and were required to perform a number of different tasks: grasping, reaching, size discrimination, pattern discrimination or passive viewing. As in prior studies, the anterior intraparietal area (AIP) in the dorsal stream was more active during grasping, when object size was relevant for planning the grasp, than during reaching, when object properties were irrelevant for movement planning (grasping>reaching). Activity in AIP showed no modulation, however, when size was computed in the context of a purely perceptual task (size = pattern discrimination). Conversely, the lateral occipital (LO) cortex in the ventral stream was modulated when size was computed for perception (size>pattern discrimination) but not for action (grasping = reaching). Conclusions/Significance While areas in both the dorsal and ventral streams responded to the simple presentation of 3D objects (passive viewing), these areas were differentially activated depending on whether the task was grasping or perceptual discrimination, respectively. The demonstration of dual coding of an object for the purposes of action on the one hand and perception on the other in the same healthy brains offers a substantial contribution to the current debate about the nature of

  20. Analogue Representations of Spatial Objects and Tranformations.

    ERIC Educational Resources Information Center

    Cooper, Lynn A.

    Considerable discussion and debate have been devoted to the extent and nature of structural or functional correspondence between internal representations and their external visual counterparts. An analogue representation or process is one in which the relational structure of external events is preserved in the corresponding internal…

  1. Cosine series representation of 3D curves and its application to white matter fiber bundles in diffusion tensor imaging

    PubMed Central

    Adluru, Nagesh; Lee, Jee Eun; Lazar, Mariana; Lainhart, Janet E.; Alexander, Andrew L.

    2011-01-01

    We present a novel cosine series representation for encoding fiber bundles consisting of multiple 3D curves. The coordinates of the curves are parameterized as coefficients of a cosine series expansion. We address the issues of registration, averaging and statistical inference on curves in a unified Hilbert space framework. Unlike traditional splines, the proposed method does not have internal knots and explicitly represents curves as a linear combination of cosine basis functions. This simplicity in the representation enables us to design statistical models, register curves and perform subsequent analysis in a more unified statistical framework than splines. The proposed representation is applied to characterizing the abnormal shape of white matter fiber tracts passing through the splenium of the corpus callosum in autistic subjects. For an arbitrary tract, a degree-19 expansion is usually sufficient to reconstruct the tract with 60 parameters. PMID:23316267
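
    A minimal sketch of the cosine-series encoding idea, assuming each curve is resampled on a unit parameter interval and fitted by least squares (the registration and inference machinery of the paper is not shown):

```python
import numpy as np

def cosine_basis(n_points, degree):
    """Design matrix of cosine basis functions psi_k(t) = cos(pi*k*t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    k = np.arange(degree + 1)[None, :]
    return np.cos(np.pi * k * t)                      # shape (n_points, degree + 1)

def encode_curve(curve_xyz, degree=19):
    """Least-squares cosine-series coefficients for each coordinate of a 3D curve."""
    Psi = cosine_basis(len(curve_xyz), degree)
    coeffs, *_ = np.linalg.lstsq(Psi, np.asarray(curve_xyz, float), rcond=None)
    return coeffs                                     # (degree+1, 3): 60 parameters for degree 19

def reconstruct(coeffs, n_points=100):
    return cosine_basis(n_points, coeffs.shape[0] - 1) @ coeffs
```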

  2. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Leister, Norbert

    2013-02-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel transform based or point-source based raytracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. For proper hologram reconstruction that implies a simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering inherent SLM parameter such as modulation type and bit depth on their reconstruction performance such as diffraction efficiency and SNR. We review the three implementation schemes of Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable a fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
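
    As an illustration of one of the representation schemes mentioned (two-phase interference), here is a hedged sketch of the standard decomposition of a complex sample into two equal-amplitude phasors; the SLM-specific macro-pixel layout and the Burckhardt amplitude-only encoding are not shown:

```python
import numpy as np

def two_phase_encoding(field, a_max=None):
    """Split each complex sample A*exp(i*phi) into two equal-amplitude phasors whose
    sum reproduces it: phi_{1,2} = phi +/- arccos(A / a_max)."""
    amp = np.abs(field)
    phi = np.angle(field)
    if a_max is None:
        a_max = amp.max()
    delta = np.arccos(np.clip(amp / a_max, 0.0, 1.0))
    return phi + delta, phi - delta   # two phase-only sub-holograms

# check: 0.5 * a_max * (np.exp(1j*p1) + np.exp(1j*p2)) reproduces the original field
```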

  3. Perception of physical stability and center of mass of 3-D objects

    PubMed Central

    Cholewiak, Steven A.; Fleming, Roland W.; Singh, Manish

    2015-01-01

    Humans can judge from vision alone whether an object is physically stable or not. Such judgments allow observers to predict the physical behavior of objects, and hence to guide their motor actions. We investigated the visual estimation of physical stability of 3-D objects (shown in stereoscopically viewed rendered scenes) and how it relates to visual estimates of their center of mass (COM). In Experiment 1, observers viewed an object near the edge of a table and adjusted its tilt to the perceived critical angle, i.e., the tilt angle at which the object was seen as equally likely to fall or return to its upright stable position. In Experiment 2, observers visually localized the COM of the same set of objects. In both experiments, observers' settings were compared to physical predictions based on the objects' geometry. In both tasks, deviations from physical predictions were, on average, relatively small. More detailed analyses of individual observers' settings in the two tasks, however, revealed mutual inconsistencies between observers' critical-angle and COM settings. The results suggest that observers did not use their COM estimates in a physically correct manner when making visual judgments of physical stability. PMID:25761331
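
    A hedged sketch of the rigid-body prediction such settings can be compared against, under the standard assumption that an object tips once its COM passes vertically over the supporting edge (the paper's exact computation is not reproduced here):

```python
import numpy as np

def center_of_mass(points, weights=None):
    """COM of a set of sample points (uniform weights by default)."""
    p = np.asarray(points, dtype=float)
    w = np.ones(len(p)) if weights is None else np.asarray(weights, dtype=float)
    return (w[:, None] * p).sum(axis=0) / w.sum()

def critical_tilt_angle_deg(com, pivot_edge):
    """Tilt angle (side view, degrees) at which the COM lies vertically above the
    pivot edge: tan(theta_crit) = horizontal offset of the COM from the edge / COM height."""
    dx = abs(com[0] - pivot_edge[0])   # horizontal distance from the edge to the COM
    h = com[1] - pivot_edge[1]         # height of the COM above the edge
    return np.degrees(np.arctan2(dx, h))
```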

  4. Perception of physical stability and center of mass of 3-D objects.

    PubMed

    Cholewiak, Steven A; Fleming, Roland W; Singh, Manish

    2015-02-10

    Humans can judge from vision alone whether an object is physically stable or not. Such judgments allow observers to predict the physical behavior of objects, and hence to guide their motor actions. We investigated the visual estimation of physical stability of 3-D objects (shown in stereoscopically viewed rendered scenes) and how it relates to visual estimates of their center of mass (COM). In Experiment 1, observers viewed an object near the edge of a table and adjusted its tilt to the perceived critical angle, i.e., the tilt angle at which the object was seen as equally likely to fall or return to its upright stable position. In Experiment 2, observers visually localized the COM of the same set of objects. In both experiments, observers' settings were compared to physical predictions based on the objects' geometry. In both tasks, deviations from physical predictions were, on average, relatively small. More detailed analyses of individual observers' settings in the two tasks, however, revealed mutual inconsistencies between observers' critical-angle and COM settings. The results suggest that observers did not use their COM estimates in a physically correct manner when making visual judgments of physical stability.

  5. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the offline training phase, each model is represented with a set of multi-scale local surface features. During the online recognition phase, a set of keypoints is first detected in each scene. The local surfaces around these keypoints are then encoded with multi-scale feature descriptors. These scene features are matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to state-of-the-art algorithms. Experimental results show that our algorithm is fully automatic and highly effective, and that it is very robust to occlusion and clutter. It achieved the best recognition performance on both datasets, showing its superiority over existing algorithms.
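
    A hedged sketch of the hypothesis-generation step (matching scene descriptors against model descriptors); the multi-scale descriptors and the verification stage used in the paper are not reproduced, and the ratio test here is an illustrative choice:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def match_descriptors(scene_feats, model_feats, ratio=0.8):
    """Nearest-neighbour matching with a ratio test; each surviving match is a
    (scene keypoint, model keypoint) correspondence hypothesis."""
    nn = NearestNeighbors(n_neighbors=2).fit(model_feats)
    dist, idx = nn.kneighbors(scene_feats)
    keep = dist[:, 0] < ratio * dist[:, 1]
    return np.flatnonzero(keep), idx[keep, 0]   # scene indices -> model indices
```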

  6. 3D Imaging with a Single-Aperture 3-mm Objective Lens: Concept, Fabrication and Test

    NASA Technical Reports Server (NTRS)

    Korniski, Ron; Bae, Sam Y.; Shearn, Mike; Manohara, Harish; Shahinian, Hrayr

    2011-01-01

    There are many advantages to minimally invasive surgery (MIS). An endoscope is the optical system of choice by the surgeon for MIS. The smaller the incision or opening made to perform the surgery, the smaller the optical system needed. For minimally invasive neurological and skull base surgeries the openings are typically 10-mm in diameter (dime sized) or less. The largest outside diameter (OD) endoscope used is 4mm. A significant drawback to endoscopic MIS is that it only provides a monocular view of the surgical site thereby lacking depth information for the surgeon. A stereo view would provide the surgeon instantaneous depth information of the surroundings within the field of view, a significant advantage especially during brain surgery. Providing 3D imaging in an endoscopic objective lens system presents significant challenges because of the tight packaging constraints. This paper presents a promising new technique for endoscopic 3D imaging that uses a single lens system with complementary multi-bandpass filters (CMBFs), and describes the proof-of-concept demonstrations performed to date validating the technique. These demonstrations of the technique have utilized many commercial off-the-shelf (COTS) components including the ones used in the endoscope objective.

  7. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article analyzes data in the form of point clouds obtained directly from 3D measurements. It is designed for end-user applications and can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while additional spatial information keeps the false positive rate at a reasonably low level.
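
    The paper's averaging-based estimators of mean and Gaussian curvature are not reproduced here; as a hedged stand-in, the sketch below computes the common PCA-based "surface variation" feature over a k-neighbourhood, which likewise avoids explicit differentiation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def surface_variation(points, k=20):
    """Curvature-like feature per point: smallest eigenvalue of the local covariance
    divided by the eigenvalue sum, computed over k neighbours."""
    points = np.asarray(points, dtype=float)
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    feats = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        eigvals = np.linalg.eigvalsh(np.cov(points[nbrs].T))   # ascending order
        feats[i] = eigvals[0] / eigvals.sum()
    return feats
```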

  8. Real-scale 3D models of the scoliotic spine from biplanar radiography without calibration objects.

    PubMed

    Moura, Daniel C; Barbosa, Jorge G

    2014-10-01

    This paper presents a new method for modelling the spines of subjects and making accurate 3D measurements using standard radiologic systems without requiring calibration objects. The method makes use of the focal distance and statistical models for estimating the geometrical parameters of the system. A dataset of 32 subjects was used to assess this method. The results show small errors for the main clinical indices, such as an RMS error of 0.49° for the Cobb angle, 0.50° for kyphosis, 0.38° for lordosis, and 2.62mm for the spinal length. This method is the first to achieve this level of accuracy without requiring the use of calibration objects when acquiring radiographs. We conclude that the proposed method allows for the evaluation of scoliosis with a much simpler setup than currently available methods. PMID:24908193

  9. Efficient 3D modeling of buildings using a priori geometric object information

    NASA Astrophysics Data System (ADS)

    Van den Heuvel, Frank A.; Vosselman, George

    1997-07-01

    This paper concerns research aimed at improving the efficiency of acquiring 3D building models from digital images for Computer Aided Architectural Design (CAAD). The results apply not only to CAAD but to all applications involving polyhedral objects. The research concentrates on the integration of a priori geometric object information in the modeling process. Parallelism and perpendicularity are examples of the a priori information to be used. This information leads to geometric constraints in the mathematical model, which can be formulated using condition equations with observations only. The advantage is that the adjustment does not include object parameters and that the geometric constraints can be incorporated into the model sequentially. As with the use of observation equations, statistical testing can be applied to verify the constraints. For the initial values of the orientation parameters of the images we also use a direct solution based on a priori object information; this method requires only two sets of (coplanar) parallel lines in object space. The paper concentrates on the mathematical model with image lines as the main type of observations. Advantages as well as disadvantages of a mathematical model with only condition equations are discussed, and the parametrization of the object model plays a major role in this discussion.

  10. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments such as laser scanners and 3D cameras, as well as image-based techniques like structure from motion, produce huge point clouds as the basis for further object analysis. This has considerably changed data compilation, moving away from selective, manually guided processes towards automatic, computer-supported strategies. However, there is still a long way to go to reach the quality and robustness of manual processes, as the data sets are mostly very complex. Existing strategies for 3D data processing for object detection and reconstruction rely heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and by their inability to handle deviations, and their lack of capability to integrate other data or information between processing steps exposes their limitations further. This restricts the approaches to a strict, predefined strategy and does not allow for deviations when new, unexpected situations arise. We propose a solution that introduces intelligence into the processing activities through the use of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". The flexibility of the solution is demonstrated through two entirely different use case scenarios: Deutsche Bahn (the German railway system) for the outdoor scenario and Fraport (Frankfurt Airport) for the indoor scenario. Apart from the difference in their environments, they provide different conditions that the solution needs to consider: while the locations of the objects at Fraport were known in advance, those of the DB objects were not.

  11. Active learning in the lecture theatre using 3D printed objects.

    PubMed

    Smith, David P

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme's active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  12. Active learning in the lecture theatre using 3D printed objects

    PubMed Central

    Smith, David P.

    2016-01-01

    The ability to conceptualize 3D shapes is central to understanding biological processes. The concept that the structure of a biological molecule leads to function is a core principle of the biochemical field. Visualisation of biological molecules often involves vocal explanations or the use of two dimensional slides and video presentations. A deeper understanding of these molecules can however be obtained by the handling of objects. 3D printed biological molecules can be used as active learning tools to stimulate engagement in large group lectures. These models can be used to build upon initial core knowledge which can be delivered in either a flipped form or a more didactic manner. Within the teaching session the students are able to learn by handling, rotating and viewing the objects to gain an appreciation, for example, of an enzyme’s active site or the difference between the major and minor groove of DNA. Models and other artefacts can be handled in small groups within a lecture theatre and act as a focal point to generate conversation. Through the approach presented here core knowledge is first established and then supplemented with high level problem solving through a "Think-Pair-Share" cooperative learning strategy. The teaching delivery was adjusted based around experiential learning activities by moving the object from mental cognition and into the physical environment. This approach led to students being able to better visualise biological molecules and a positive engagement in the lecture. The use of objects in teaching allows the lecturer to create interactive sessions that both challenge and enable the student. PMID:27366318

  13. A Validity Study of Two Projective Object Representations Measures.

    ERIC Educational Resources Information Center

    Hibbard, Stephen; And Others

    1995-01-01

    Two projective measures of object representations, the Concept of the Object on the Rorschach and the Social Cognition and Object Relations Scales, were compared with each other and measures of intelligence and pathology with 15 children and 94 adult patients. Results support the construct validity of object representations. (SLD)

  14. Representation of chemical information in OASIS centralized 3D database for existing chemicals.

    PubMed

    Nikolov, Nikolai; Grancharov, Vanio; Stoyanova, Galya; Pavlov, Todor; Mekenyan, Ovanes

    2006-01-01

    The present inventory of existing chemicals in regulatory agencies in North America and Europe, encompassing the chemicals of the European Chemicals Bureau (EINECS, with 61 573 discrete chemicals); the Danish EPA (159 448 chemicals); the U.S. EPA (TSCA, 56 882 chemicals; HPVC, 10 546 chemicals) and pesticides' active and inactive ingredients of the U.S. EPA (1379 chemicals); the Organization for Economic Cooperation and Development (HPVC, 4750 chemicals); Environment Canada (DSL, 10851 chemicals); and the Japanese Ministry of Economy, Trade, and Industry (16811), was combined in a centralized 3D database for existing chemicals. The total number of unique chemicals from all of these databases exceeded 185 500. Defined and undefined chemical mixtures and polymers are handled, along with discrete (hydrolyzing and nonhydrolyzing) chemicals. The database manager provides the storage and retrieval of chemical structures with 2D and 3D data, accounting for molecular flexibility by using representative sets of conformers for each chemical. The electronic and geometric structures of all conformers are quantum-chemically optimized and evaluated. Hence, the database contains over 3.7 million 3D records with hundreds of millions of descriptor data items at the levels of structures, conformers, or atoms. The platform contains a highly developed search subsystem--a search is possible on Chemical Abstracts Service numbers; names; 2D and 3D fragment searches; structural, conformational, or atomic properties; affiliation in other chemical databases; structure similarity; logical combinations; saved queries; and search result exports. Models (collections of logically related descriptors) are supported, including information on a model's author, date, bioassay, organs/tissues, conditions, administration, and so forth. Fragments can be interactively constructed using a visual structure editor. A configurable database browser is designed for the inspection and editing of all types of

  16. 3-D representation of aquitard topography using ground-penetrating radar

    SciTech Connect

    Young, R.A.; Sun, Jingsheng

    1995-12-31

    The topography of a clay aquitard is defined by 3D Ground Penetrating Radar (GPR) data at Hill Air Force Base, Utah. Conventional processing augmented by multichannel domain filtering shows a strong reflection from a depth of 20-30 ft despite attenuation by an artificial clay cap approximately 2 ft thick. This reflection correlates very closely with the top of the aquitard as seen in lithology logs at three wells crossed by common-offset radar profiles from the 3D dataset. Lateral and vertical resolution along the boundary are approximately 2 ft and 1 ft, respectively. The boundary shows abrupt topographic variation of 5 ft over horizontal distances of 20 ft or less, probably due to vigorous erosion by streams during lowstands of ancient Lake Bonneville. This irregular topography may provide depressions for accumulation of hydrocarbons and chlorinated organic pollutants. A ridge running the length of the survey area may channel movement of ground water and of hydrocarbons trapped at the surface of the water table. Depth slices through a 3D volume, and picked points along the aquitard displayed in depth and relative elevation perspectives, provide much more useful visualization than several 2D lines by themselves. The three-dimensional GPR image provides far more detailed definition of geologic boundaries than does projection of soil boring logs into two-dimensional profiles.

  17. Statistical representation of high-dimensional deformation fields with application to statistically constrained 3D warping.

    PubMed

    Xue, Zhong; Shen, Dinggang; Davatzikos, Christos

    2006-10-01

    This paper proposes a 3D statistical model aiming at effectively capturing statistics of high-dimensional deformation fields and then uses this prior knowledge to constrain 3D image warping. The conventional statistical shape model methods, such as the active shape model (ASM), have been very successful in modeling shape variability. However, their accuracy and effectiveness typically drop dramatically in high-dimensionality problems involving relatively small training datasets, which is customary in 3D and 4D medical imaging applications. The proposed statistical model of deformation (SMD) uses wavelet-based decompositions coupled with PCA in each wavelet band, in order to more accurately estimate the pdf of high-dimensional deformation fields, when a relatively small number of training samples are available. SMD is further used as statistical prior to regularize the deformation field in an SMD-constrained deformable registration framework. As a result, more robust registration results are obtained relative to using generic smoothness constraints on deformation fields, such as Laplacian-based regularization. In experiments, we first illustrate the performance of SMD in representing the variability of deformation fields and then evaluate the performance of the SMD-constrained registration, via comparing a hierarchical volumetric image registration algorithm, HAMMER, with its SMD-constrained version, referred to as SMD+HAMMER. This SMD-constrained deformable registration framework can potentially incorporate various registration algorithms to improve robustness and stability via statistical shape constraints.
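
    A schematic sketch of the wavelet-band PCA idea behind SMD, using PyWavelets and scikit-learn on flattened fields; the actual multi-dimensional decomposition, the choice of wavelet, and the coupling to the registration algorithm are assumptions rather than the authors' implementation:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def fit_band_pcas(training_fields, wavelet="db2", level=2, n_components=5):
    """Decompose each training deformation field into wavelet bands and fit a
    separate PCA per band, giving a band-wise statistical prior."""
    coeff_sets = [pywt.wavedec(np.asarray(f, float).ravel(), wavelet, level=level)
                  for f in training_fields]
    pcas = []
    for band in range(level + 1):
        X = np.stack([c[band] for c in coeff_sets])        # samples x band coefficients
        pcas.append(PCA(n_components=min(n_components, len(X))).fit(X))
    return pcas
```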

  18. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    PubMed

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust, simple and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Digital images of different objects are used to build sparse and then dense 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces with different surface reconstruction algorithms, such as Poisson reconstruction and the ball-pivoting algorithm. Control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface, and their effects on surface generation from point clouds of different densities are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) value, with greater SN values resulting in better quality surfaces. The quality of the 3D surface generated using the ball-pivoting algorithm is found to be highly dependent on the clustering radius and angle threshold values. The results of this study give the reader valuable insight into how the different control parameters determine the reconstructed surface quality. PMID:27386376
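
    A hedged usage sketch of the two reconstruction families compared in the paper, using Open3D; its parameter names (e.g. octree depth, ball radii) differ from the samples-per-node, clustering-radius and angle-threshold controls discussed in the abstract, which belong to other implementations:

```python
import open3d as o3d

def reconstruct_surfaces(pcd_path):
    """Poisson and ball-pivoting reconstructions of a point cloud (illustrative parameters)."""
    pcd = o3d.io.read_point_cloud(pcd_path)
    pcd.estimate_normals()
    poisson_mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    radii = o3d.utility.DoubleVector([0.01, 0.02, 0.04])   # ball radii: tune to point spacing
    bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
    return poisson_mesh, bpa_mesh
```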

  20. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Starting from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional way of determining curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected by a plane. Using the same ten coefficients obtained earlier and the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves by which it can be differentiated from the others. It is shown that, instead of using the three-dimensional discriminant, which involves evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which requires only three arithmetic operations.
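
    A minimal sketch of the two-dimensional discriminant test mentioned above, for a cross-sectional curve Ax² + Bxy + Cy² + Dx + Ey + F = 0 (the full ten-coefficient quadric pipeline is not reproduced):

```python
def classify_conic(A, B, C, tol=1e-9):
    """Classify a conic by the discriminant B^2 - 4AC (three arithmetic operations)."""
    disc = B * B - 4.0 * A * C
    if abs(disc) <= tol:
        return "parabola"
    if disc < 0:
        return "circle" if abs(B) <= tol and abs(A - C) <= tol else "ellipse"
    return "hyperbola"
```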

  1. The time course of configural change detection for novel 3-D objects.

    PubMed

    Favelle, Simone; Palmisano, Stephen

    2010-05-01

    The present study investigated the time course of visual information processing that is responsible for successful object change detection involving the configuration and shape of 3-D novel object parts. Using a one-shot change detection task, we manipulated stimulus and interstimulus mask durations (40-500 msec). Experiments 1A and 1B showed no change detection advantage for configuration at very short (40-msec) stimulus durations, but the configural advantage did emerge with durations between 80 and 160 msec. In Experiment 2, we showed that, at shorter stimulus durations, the number of parts changing was the best predictor of change detection performance. Finally, in Experiment 3, with a stimulus duration of 160 msec, configuration change detection was found to be highly accurate for each of the mask durations tested, suggesting a fast processing speed for this kind of change information. However, switch and shape change detection reached peak levels of accuracy only when mask durations were increased to 160 and 320 msec, respectively. We conclude that, with very short stimulus exposures, successful object change detection depends primarily on quantitative measures of change. However, with longer stimulus exposures, the qualitative nature of the change becomes progressively more important, resulting in the well-known configural advantage for change detection.

  2. Extraction and classification of 3D objects from volumetric CT data

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for an Explosive Detection System (EDS) using our multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage SC step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object and classified by an SVM previously learned using a set of ground-truth threat and benign objects. The learned SVM classifier has been shown to be effective in classifying different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which relates Probability of Detection (PD) to Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.
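
    A hedged sketch of the classification stage (feature vectors of extracted objects fed to an SVM); the segmentation-and-carving step and the actual feature definitions are not shown, and the RBF kernel and scaling are illustrative choices:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_threat_classifier(features, labels):
    """Fit an SVM on feature vectors of extracted 3-D objects (1 = threat, 0 = benign)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    return clf.fit(np.asarray(features), np.asarray(labels))

# Sweeping a threshold over clf.predict_proba(X)[:, 1] traces out the PD-versus-PFA (ROC) curve.
```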

  3. A 3D sequence-independent representation of the protein data bank.

    PubMed

    Fischer, D; Tsai, C J; Nussinov, R; Wolfson, H

    1995-10-01

    Here we address the following questions. How many structurally different entries are there in the Protein Data Bank (PDB)? How do the proteins populate the structural universe? To investigate these questions a structurally non-redundant set of representative entries was selected from the PDB. Construction of such a dataset is not trivial: (i) the considerable size of the PDB requires a large number of comparisons (there were more than 3250 structures of protein chains available in May 1994); (ii) the PDB is highly redundant, containing many structurally similar entries, not necessarily with significant sequence homology, and (iii) there is no clear-cut definition of structural similarity. The latter depends on the criteria and methods used. Here, we analyze structural similarity ignoring protein topology. To date, representative sets have been selected either by hand, by sequence comparison techniques which ignore the three-dimensional (3D) structures of the proteins or by using sequence comparisons followed by linear structural comparison (i.e. the topology, or the sequential order of the chains, is enforced in the structural comparison). Here we describe a 3D sequence-independent automated and efficient method to obtain a representative set of protein molecules from the PDB which contains all unique structures and which is structurally non-redundant. The method has two novel features. The first is the use of strictly structural criteria in the selection process without taking into account the sequence information. To this end we employ a fast structural comparison algorithm which requires on average approximately 2 s per pairwise comparison on a workstation. The second novel feature is the iterative application of a heuristic clustering algorithm that greatly reduces the number of comparisons required. We obtain a representative set of 220 chains with resolution better than 3.0 Å, or 268 chains including lower resolution entries, NMR entries and models. The
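
    The structural comparison algorithm and the iterative clustering heuristic of the paper are not reproduced here; purely as a hedged illustration, the sketch below shows a generic greedy selection of non-redundant representatives given a precomputed pairwise structural-distance matrix:

```python
import numpy as np

def greedy_representatives(dist, threshold):
    """Keep an entry only if it is farther than `threshold` from every entry kept so far."""
    dist = np.asarray(dist)
    reps = []
    for i in range(len(dist)):
        if all(dist[i, j] > threshold for j in reps):
            reps.append(i)
    return reps
```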

  5. Neural representation for object recognition in inferotemporal cortex.

    PubMed

    Lehky, Sidney R; Tanaka, Keiji

    2016-04-01

    We suggest that the population representation of objects in inferotemporal cortex lies on a continuum between a purely structural, parts-based description and a purely holistic description. The intrinsic dimensionality of object representation is estimated to be around 100, perhaps with lower dimensionalities for object representations more toward the holistic end of the spectrum. Cognitive knowledge in the form of semantic information and task information feeds back to inferotemporal cortex from perirhinal and prefrontal cortex, respectively, providing high-level multimodal expectations that assist in the interpretation of object stimuli. Integration of object information across eye movements may also contribute to object recognition through a process of active vision. PMID:26771242
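
    The dimensionality figure above invites a simple worked illustration; the sketch below estimates the dimensionality of a population response matrix by counting principal components needed to retain a chosen variance fraction. This PCA criterion is an assumption for illustration only, not the authors' estimation method:

```python
import numpy as np
from sklearn.decomposition import PCA

def intrinsic_dimensionality(responses, variance_kept=0.9):
    """Rough dimensionality estimate of a population code (rows = stimuli, columns = neurons):
    number of principal components needed to retain `variance_kept` of the variance."""
    pca = PCA().fit(np.asarray(responses, float))
    cum = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum, variance_kept) + 1)
```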

  6. A neural-network appearance-based 3-D object recognition using independent component analysis.

    PubMed

    Sahambi, H S; Khorasani, K

    2003-01-01

    This paper presents results on appearance-based three-dimensional (3-D) object recognition (3DOR) accomplished by utilizing a neural-network architecture developed based on independent component analysis (ICA). ICA has already been applied to face recognition in the literature with encouraging results. In this paper, we explore the possibility of utilizing the redundant information in the visual data to enhance view-based object recognition. The underlying premise is that since ICA uses higher-order statistics, it should in principle outperform principal component analysis (PCA), which does not utilize statistics higher than second order, in the recognition task. Two databases of images captured by a CCD camera are used. It is demonstrated that ICA did perform better than PCA on one of the databases, but interestingly its performance was no better than PCA in the case of the second database. This suggests that the use of ICA may not necessarily always give better results than PCA, and that the application of ICA is highly data dependent. Various factors affecting the differences in the recognition performance of the two methods are also discussed. PMID:18237997

  7. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006
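
    A hedged sketch of the depth cue described above: for a translating, yawing observer, the translational component of a point's image angular velocity scales with 1/distance while the rotational component does not (simple planar geometry; the authors' full optic-flow model is not reproduced):

```python
import numpy as np

def angular_velocity(distance, bearing, speed, yaw_rate):
    """Image angular velocity (rad/s) of a point at `distance`, viewed at `bearing`
    radians from the heading direction, for translation at `speed` plus yaw rotation."""
    return speed * np.sin(bearing) / distance + yaw_rate

# Nearer surface regions (smaller distance) yield larger translational displacements,
# which is the motion-parallax depth cue discussed in the abstract.
```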

  9. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed Central

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

    Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees’ flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, additional rotation provided robust depth information based on the direction of the displacements; thus, the bees flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006

  10. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. These include individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and considerable technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  11. A Dynamic 3D Graphical Representation for RNA Structure Analysis and Its Application in Non-Coding RNA Classification

    PubMed Central

    Dong, Xiaoqing; Fang, Yiliang; Wang, Kejing; Zhu, Lijuan; Wang, Ke; Huang, Tao

    2016-01-01

    With the development of new technologies in transcriptomics and epigenetics, RNAs have been found to play increasingly important roles in life processes. Consequently, various methods have been proposed to assess the biological functions of RNAs and thus classify them functionally, among which the comparative study of RNA structures is perhaps the most important. To measure the structural similarity of RNAs and classify them, we propose a novel three-dimensional (3D) graphical representation of RNA secondary structure, in which an RNA secondary structure is first transformed into a characteristic sequence based on the chemical properties of nucleic acids; a dynamic 3D graph is then constructed for the characteristic sequence; and lastly a numerical characterization of the 3D graph is used to represent the RNA secondary structure. We tested our algorithm on three datasets: (1) Dataset I, consisting of nine RNA secondary structures of viruses; (2) Dataset II, consisting of complex RNA secondary structures including pseudoknots; and (3) Dataset III, consisting of 18 non-coding RNA families. We also compare our method with nine other existing methods using Datasets II and III. The results demonstrate that our method is better than the other methods in similarity measurement and classification of RNA secondary structures. PMID:27213271
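
    The paper's chemical-property encoding and dynamic 3D graph construction are not specified here; purely as a hypothetical illustration of the general idea (characteristic sequence → 3D walk → numerical descriptor), one might write something like the following, where the step directions are invented for the example:

```python
import numpy as np

# Hypothetical 3D step directions per nucleotide class (illustrative only; the paper's
# actual chemical-property encoding is not reproduced).
STEPS = {"A": (1, 0, 0), "U": (-1, 0, 0), "G": (0, 1, 0), "C": (0, -1, 0)}

def graph_3d(sequence):
    """Cumulative 3D walk of a characteristic sequence."""
    steps = np.array([STEPS[ch] for ch in sequence if ch in STEPS], dtype=float)
    return np.cumsum(steps, axis=0)

def descriptor(sequence):
    """A simple numerical characterization of the 3D graph (mean coordinate; an assumption)."""
    return graph_3d(sequence).mean(axis=0)
```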

  12. Past Experience Influences Object Representation in Working Memory

    ERIC Educational Resources Information Center

    Wagar, B.M.; Dixon, M.J.

    2005-01-01

    The nature of object representation in working memory is vital to establishing the capacity of working memory, which in turn shapes the limits of visual cognition and awareness. Although current theories discuss whether representations in working memory are feature-based or object-based, no theory has considered the role of past experience.…

  13. Evidence for similar early but not late representation of possible and impossible objects

    PubMed Central

    Freud, Erez; Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi

    2015-01-01

    The perceptual processes that mediate the ability to efficiently represent object 3D structure are still not fully understood. The current study aimed to shed light on these processes by utilizing spatially possible and impossible objects, the latter of which could not be created in real 3D space. Despite being perceived as exceptionally unusual, impossible objects still possess fundamental Gestalt attributes and valid local depth cues that may support their initial successful representation. Based on this notion and on recent findings from our lab, we hypothesized that the initial representation of impossible objects would involve mechanisms common with those mediating typical object perception, while the perceived differences between possible and impossible objects would emerge later along the processing hierarchy. In Experiment 1, participants performed same/different classifications of two markers superimposed on a display containing two objects (possible or impossible). Faster reaction times were observed for displays in which the markers were superimposed on the same object (“object-based benefit”). Importantly, this benefit was similar for possible and impossible objects, suggesting that the representations of the two object categories rely on similar perceptual organization processes. Yet, responses for impossible objects were slower than for possible objects. Experiment 2 was designed to examine the origin of this effect. Participants classified the location of two markers while exposure duration was manipulated. A similar pattern of performance was found for possible and impossible objects at the short exposure duration, with differences in accuracy between these two types of objects emerging only for longer exposure durations. Overall, these findings provide evidence that the representation of object structure relies on a multi-level process and that object impossibility selectively impairs the rendering of a fine-detailed description of object structure. PMID

  14. Evidence for similar early but not late representation of possible and impossible objects.

    PubMed

    Freud, Erez; Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi

    2015-01-01

    The perceptual processes that mediate the ability to efficiently represent object 3D structure are still not fully understood. The current study aimed to shed light on these processes by utilizing spatially possible and impossible objects, the latter of which could not be created in real 3D space. Despite being perceived as exceptionally unusual, impossible objects still possess fundamental Gestalt attributes and valid local depth cues that may support their initial successful representation. Based on this notion and on recent findings from our lab, we hypothesized that the initial representation of impossible objects would involve mechanisms common with those mediating typical object perception, while the perceived differences between possible and impossible objects would emerge later along the processing hierarchy. In Experiment 1, participants performed same/different classifications of two markers superimposed on a display containing two objects (possible or impossible). Faster reaction times were observed for displays in which the markers were superimposed on the same object ("object-based benefit"). Importantly, this benefit was similar for possible and impossible objects, suggesting that the representations of the two object categories rely on similar perceptual organization processes. Yet, responses for impossible objects were slower than for possible objects. Experiment 2 was designed to examine the origin of this effect. Participants classified the location of two markers while exposure duration was manipulated. A similar pattern of performance was found for possible and impossible objects at the short exposure duration, with differences in accuracy between these two types of objects emerging only for longer exposure durations. Overall, these findings provide evidence that the representation of object structure relies on a multi-level process and that object impossibility selectively impairs the rendering of a fine-detailed description of object structure.

  15. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  16. A multi-objective optimization framework to model 3D river and landscape evolution processes

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Castelletti, Andrea; Cominola, Andrea; Mason, Emanuele; Paik, Kyungrock

    2013-04-01

    Water and sediment interactions shape hillslopes, regulate soil erosion and sedimentation, and organize river networks. Landscape evolution and river organization occur at various spatial and temporal scales, and their understanding and modelling are highly complex. The idea of a least action principle governing river network evolution has been proposed many times as a simpler approach among the ones existing in the literature. These theories assume that river networks, as observed in nature, self-organize and act on soil transportation in order to satisfy a particular "optimality" criterion. Accordingly, river and landscape weathering can be simulated by solving an optimization problem, where the choice of the criterion to be optimized becomes the initial assumption. The comparison between natural river networks and optimized ones verifies the correctness of this initial assumption. Yet, various criteria have been proposed in the literature and there is no consensus on which is better able to explain river network features observed in nature, such as network branching and river bed profile: each one is able to reproduce some river features through simplified modelling of the natural processes, but it fails to characterize the whole complexity (3D structure and its dynamics) of the natural processes. Some of the criteria formulated in the literature partly conflict: the reason is that their formulations rely on mathematical and theoretical simplifications of the natural system that are suitable for specific spatial and temporal scales but fail to represent the full set of processes characterizing landscape evolution. In an attempt to address some of these scientific questions, we tested the suitability of using a multi-objective optimization framework to describe river and landscape evolution in a 3D spatial domain. A synthetic landscape is used for this purpose. Multiple, alternative river network evolutions, corresponding to as many tradeoffs between the different and partly

  17. Ionized Outflows in 3-D Insights from Herbig-Haro Objects and Applications to Nearby AGN

    NASA Technical Reports Server (NTRS)

    Cecil, Gerald

    1999-01-01

    HST shows that the gas distributions of these objects are complex and clumpy at the limit of resolution. HST spectra have lumpy emission-line profiles, indicating unresolved sub-structure. The advantages of 3D over slits on gas so distributed are: robust flux estimates of various dynamical systems projected along lines of sight, sensitivity to fainter spectral lines that are physical diagnostics (reddening, gas density, T, excitation mechanisms, abundances), and improved prospects for recovery of unobserved dimensions of phase-space. These advantages allow more confident modeling for more profound inquiry into underlying dynamics. The main complication is the effort required to link multi-frequency datasets that optimally track the energy flow through various phases of the ISM. This tedium has limited the number of objects that have been thoroughly analyzed to the a priori most spectacular systems. For HHOs, proper motions constrain the ambient B-field, shock velocity, gas abundances, mass-loss rates, source duty-cycle, and tie-ins with molecular flows. If the shock speed, hence ionization fraction, is indeed small, then the ionized gas is a significant part of the flow energetics. For AGNs, nuclear beaming is a source of ionization ambiguity. Establishing the energetics of the outflow is critical to determining how the accretion disk loses its energy. CXO will provide new constraints (especially spectral) on AGN outflows, and STIS UV-spectroscopy is also constraining cloud properties (although limited by extinction). HHOs show some of the things that we will find around AGNs. I illustrate these points with results from ground-based and HST programs being pursued with collaborators.

  18. New 3D thermal evolution model for icy bodies application to trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Guilbert-Lepoutre, A.; Lasue, J.; Federico, C.; Coradini, A.; Orosei, R.; Rosenberg, E. D.

    2011-05-01

    Context. Thermal evolution models have been developed over the years to investigate the evolution of thermal properties based on the transfer of heat fluxes or transport of gas through a porous matrix, among others. Applications of such models to trans-Neptunian objects (TNOs) and Centaurs have shown that these bodies could be strongly differentiated from the point of view of chemistry (i.e. loss of most volatile ices), as well as from physics (e.g. melting of water ice), resulting in stratified internal structures with differentiated cores and potential pristine material close to the surface. In this context, some observational results, such as the detection of crystalline water ice or volatiles, remain puzzling. Aims: In this paper, we would like to present a new fully three-dimensional thermal evolution model. With this model, we aim to improve determination of the temperature distribution inside icy bodies such as TNOs by accounting for lateral heat fluxes, which have been proven to be important for accurate simulations. We also would like to be able to account for heterogeneous boundary conditions at the surface through various albedo properties, for example, that might induce different local temperature distributions. Methods: In a departure from published modeling approaches, the heat diffusion problem and its boundary conditions are represented in terms of real spherical harmonics, increasing the numerical efficiency by roughly an order of magnitude. We then compare this new model with another recently published 3D model to illustrate the advantages and limits of the new model. We try to put some constraints on the presence of crystalline water ice at the surface of TNOs. Results: The results obtained with this new model are in excellent agreement with results obtained by different groups with various models. Small TNOs could remain primitive unless they are formed quickly (less than 2 Myr) or are debris from the disruption of larger bodies. We find that, for

  19. Laser Transfer of Metals and Metal Alloys for Digital Microfabrication of 3D Objects.

    PubMed

    Zenou, Michael; Sa'ar, Amir; Kotler, Zvi

    2015-09-01

    3D copper logos printed on epoxy glass laminates are demonstrated. The structures are printed using laser transfer of molten metal microdroplets. The example in the image shows letters of 50 µm width, with each letter being taller than the last, from a height of 40 µm ('s') to 190 µm ('l'). The scanning microscopy image is taken at a tilt, and the topographic image was taken using interferometric 3D microscopy, to show the effective control of this technique. PMID:25966320

  20. Method for contour extraction for object representation

    DOEpatents

    Skourikhine, Alexei N.; Prasad, Lakshman

    2005-08-30

    Contours are extracted for representing a pixelated object in a background pixel field. An object pixel that is the start of a new contour for the object is located and identified as the first pixel of the new contour. A first contour point is then located at the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points at the mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered, completing the trace of the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
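
    The mid-point idea above can be sketched in a few lines of code. The Python snippet below, written purely for illustration here (it is not the patented procedure), traces closed contours through the mid-points of pixel transition edges of a binary mask using a plain marching-squares-style pass followed by segment chaining; the function name and the saddle-case handling are choices made for this sketch.

      import numpy as np

      def extract_contours(mask):
          # Trace closed contours through the mid-points of pixel transition edges
          # of a binary mask (1 = object, 0 = background).
          m = np.pad(np.asarray(mask, dtype=np.uint8), 1)    # background border
          segments = []                                       # undirected mid-point pairs
          H, W = m.shape
          for r in range(H - 1):
              for c in range(W - 1):
                  tl, tr = m[r, c], m[r, c + 1]
                  bl, br = m[r + 1, c], m[r + 1, c + 1]
                  pts = []
                  if tl != tr: pts.append((r, c + 0.5))        # top-edge mid-point
                  if bl != br: pts.append((r + 1, c + 0.5))    # bottom-edge mid-point
                  if tl != bl: pts.append((r + 0.5, c))        # left-edge mid-point
                  if tr != br: pts.append((r + 0.5, c + 1))    # right-edge mid-point
                  if len(pts) == 2:
                      segments.append((pts[0], pts[1]))
                  elif len(pts) == 4:                          # saddle cell: pair arbitrarily
                      segments.append((pts[0], pts[2]))
                      segments.append((pts[1], pts[3]))
          adjacency = {}                                       # each mid-point touches exactly two segments
          for a, b in segments:
              adjacency.setdefault(a, []).append(b)
              adjacency.setdefault(b, []).append(a)
          contours, visited = [], set()
          for start in adjacency:
              if start in visited:
                  continue
              contour, prev, cur = [start], None, start
              visited.add(start)
              while True:
                  nbrs = adjacency[cur]
                  nxt = nbrs[0] if nbrs[0] != prev else nbrs[1]
                  if nxt == start:
                      break                                    # contour closed
                  contour.append(nxt)
                  visited.add(nxt)
                  prev, cur = cur, nxt
              contours.append(contour)
          return contours

      print(extract_contours([[0, 1, 1], [0, 1, 0]]))          # one closed contour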

  1. Representation, indexing, and retrieval of moving objects

    NASA Astrophysics Data System (ADS)

    Ye, Huanzhuo; Gong, Jianya; Li, Deren; Pan, Jianping; Chen, Yumin

    2003-12-01

    Moving objects are complicated to manage because they involve temporal attributes as well as spatial attributes. There are two methods to represent the motion of moving objects: the function method and the sampling method. Motion state modeling, based on the sampling method, can give an object's position, orientation and their changes at a specific epoch, and encapsulates all the calculations using object-oriented methods. A major task is searching the motion state vectors efficiently, which can be performed with the help of 2n index trees, an efficient indexing method for multi-dimensional data. Different kinds of motion data retrieval can be transformed into basic searches in 2n index trees. With proper operation algorithms, 2n index trees work well for the indexing and retrieval of moving objects.

  2. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention driven by affective mechanisms can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons. It shows high information transfer rates, users need only a few minutes to learn to control the BCI system, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  3. Dimensionality of object representations in monkey inferotemporal cortex

    PubMed Central

    Lehky, Sidney R.; Kiani, Roozbeh; Esteky, Hossein; Tanaka, Keiji

    2014-01-01

    We have calculated the intrinsic dimensionality of visual object representations in anterior inferotemporal (AIT) cortex, based on responses of a large sample of cells stimulated with photographs of diverse objects. As dimensionality was dependent on data set size, we determined asymptotic dimensionality as both the number of neurons and the number of stimulus images approached infinity. Our final dimensionality estimate was 93 (SD: ± 11), indicating that there is a basis set of approximately a hundred independent features that characterizes the dimensions of neural object space. We believe this is the first estimate of the dimensionality of neural visual representations based on single-cell neurophysiological data. The dimensionality of AIT object representations was much lower than the dimensionality of the stimuli. We suggest that there may be a gradual reduction in the dimensionality of object representations in neural populations going from retina to inferotemporal cortex, as receptive fields become increasingly complex. PMID:25058707
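
    As a loose illustration of what a dimensionality estimate of a neurons-by-stimuli response matrix can look like, the toy snippet below computes a participation-ratio estimate from the eigenvalues of the neuron covariance matrix; it is only distantly related to the asymptotic estimator used in the paper, and the simulated data are placeholders.

      import numpy as np

      def participation_ratio(responses):
          # Rough dimensionality estimate of a (neurons x stimuli) response matrix:
          # the participation ratio of the eigenvalues of the neuron covariance.
          eigvals = np.clip(np.linalg.eigvalsh(np.cov(responses)), 0.0, None)
          return eigvals.sum() ** 2 / (eigvals ** 2).sum()

      rng = np.random.default_rng(0)          # toy data: 200 cells, 1000 stimuli
      print(participation_ratio(rng.standard_normal((200, 1000))))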

  4. Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography.

    PubMed

    Kim, Kyoohyun; Kim, Kyung Sang; Park, Hyunjoo; Ye, Jong Chul; Park, Yongkeun

    2013-12-30

    3-D refractive index (RI) distribution is an intrinsic bio-marker for the chemical and structural information about biological cells. Here we develop an optical diffraction tomography technique for the real-time reconstruction of 3-D RI distribution, employing sparse angle illumination and a graphic processing unit (GPU) implementation. The execution time for the tomographic reconstruction is 0.21 s for 96³ voxels, which is 17 times faster than that of a conventional approach. We demonstrated the real-time visualization capability by imaging the dynamics of Brownian motion of an anisotropic colloidal dimer and the dynamic shape change in a red blood cell upon shear flow.

  5. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  6. Object representation in the human auditory system

    PubMed Central

    Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto

    2010-01-01

    One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch ’border’ between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636

  7. Superquadrics objects representation for robot manipulation

    NASA Astrophysics Data System (ADS)

    Silva, Eliana Costa e.; Costa, M. Fernanda; Erlhagen, Wolfram; Bicho, Estela

    2016-06-01

    Superquadrics are mathematically quite simple and have the ability to represent a variety of shapes using a low-order parameterization. Furthermore, they present closed-form equations and therefore can be used in the formulation of robotic movement planning problems, in particular in obstacle-avoidance and grasping constraints. In this paper we explore the modeling of objects using superquadrics. The classical nonlinear optimization problem for fitting shapes is extended by adding nonlinear constraints. The numerical results obtained by two different optimization methods are presented, and a comparison of the volume of the superquadrics to the volume of simple ellipsoids is made.
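
    A minimal sketch of the superquadric inside-outside function and an unconstrained least-squares fit is given below, assuming a known, axis-aligned pose and using scipy.optimize.least_squares; the constrained formulation and the two solvers compared in the paper are not reproduced here.

      import numpy as np
      from scipy.optimize import least_squares

      def inside_outside(points, a1, a2, a3, e1, e2):
          # Superquadric inside-outside function (origin-centred, axis-aligned):
          # F < 1 inside, F = 1 on the surface, F > 1 outside.
          x, y, z = np.abs(points).T
          return (((x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)) ** (e2 / e1)
                  + (z / a3) ** (2.0 / e1))

      def fit_superquadric(points, p0=(1.0, 1.0, 1.0, 1.0, 1.0)):
          # Unconstrained least-squares recovery of (a1, a2, a3, e1, e2) from
          # surface points, with the pose assumed known.
          def residual(p):
              a1, a2, a3, e1, e2 = p
              return inside_outside(points, a1, a2, a3, e1, e2) ** e1 - 1.0
          bounds = ([1e-3, 1e-3, 1e-3, 0.1, 0.1], [np.inf, np.inf, np.inf, 2.0, 2.0])
          return least_squares(residual, p0, bounds=bounds).x

      # toy example: surface points of an ellipsoid with semi-axes (2, 1, 0.5)
      u, v = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, 15), np.linspace(-np.pi, np.pi, 15))
      pts = np.stack([2.0 * np.cos(u) * np.cos(v),
                      1.0 * np.cos(u) * np.sin(v),
                      0.5 * np.sin(u)], axis=-1).reshape(-1, 3)
      print(fit_superquadric(pts))            # approximately (2, 1, 0.5, 1, 1)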

  8. An object-based methodology for knowledge representation in SGML

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows for representation of knowledge from multiple and varying viewpoints. The event construct allows actual use of knowledge to be represented. The SGML implementation of the methodology facilitates usability, structured, yet flexible knowledge design, and sharing and reuse of knowledge class libraries.

  9. Learning Warps Object Representations in the Ventral Temporal Cortex.

    PubMed

    Clarke, Alex; Pell, Philip J; Ranganath, Charan; Tyler, Lorraine K

    2016-07-01

    The human ventral temporal cortex (VTC) plays a critical role in object recognition. Although it is well established that visual experience shapes VTC object representations, the impact of semantic and contextual learning is unclear. In this study, we tracked changes in representations of novel visual objects that emerged after learning meaningful information about each object. Over multiple training sessions, participants learned to associate semantic features (e.g., "made of wood," "floats") and spatial contextual associations (e.g., "found in gardens") with novel objects. fMRI was used to examine VTC activity for objects before and after learning. Multivariate pattern similarity analyses revealed that, after learning, VTC activity patterns carried information about the learned contextual associations of the objects, such that objects with contextual associations exhibited higher pattern similarity after learning. Furthermore, these learning-induced increases in pattern information about contextual associations were correlated with reductions in pattern information about the object's visual features. In a second experiment, we validated that these contextual effects translated to real-life objects. Our findings demonstrate that visual object representations in VTC are shaped by the knowledge we have about objects and show that object representations can flexibly adapt as a consequence of learning with the changes related to the specific kind of newly acquired information. PMID:26967942
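
    The pattern-similarity logic referred to in this abstract can be sketched, under strong simplifications, as the mean pairwise correlation between item-wise voxel patterns; the snippet below is such a stand-in, with array sizes chosen purely for illustration.

      import numpy as np

      def mean_pattern_similarity(patterns):
          # Mean pairwise Pearson correlation between the rows of an
          # (items x voxels) activity-pattern matrix.
          z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
          corr = z @ z.T / patterns.shape[1]              # items x items correlation matrix
          return corr[~np.eye(len(corr), dtype=bool)].mean()

      rng = np.random.default_rng(1)                      # toy patterns: 20 objects x 500 voxels
      print(mean_pattern_similarity(rng.standard_normal((20, 500))))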

  10. Object Representations Maintain Attentional Control Settings across Space and Time

    ERIC Educational Resources Information Center

    Schreij, Daniel; Olivers, Christian N. L.

    2009-01-01

    Previous research has revealed that we create and maintain mental representations for perceived objects on the basis of their spatiotemporal continuity. An important question is what type of information can be maintained within these so-called object files. We provide evidence that object files retain specific attentional control settings for…

  11. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence-free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual-mesh leapfrog scheme presented here has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems of current practical interest, involving structured composites and metamaterials.
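
    For orientation, the snippet below shows the ordinary structured-grid leapfrog (Yee) update in one dimension and normalised units; the paper's contribution, a leapfrog scheme on unstructured Delaunay/Voronoi dual meshes handling anisotropic lossy materials, is not reproduced here.

      import numpy as np

      nx, nt = 200, 400
      ez = np.zeros(nx)            # E field sampled at integer grid points
      hy = np.zeros(nx - 1)        # H field sampled at half-integer grid points
      courant = 0.5                # c * dt / dx

      for n in range(nt):
          hy += courant * (ez[1:] - ez[:-1])                 # update H from the spatial difference of E
          ez[1:-1] += courant * (hy[1:] - hy[:-1])           # update interior E from the new H
          ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)   # soft Gaussian source at the centre

      print(float(np.abs(ez).max()))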

  12. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera-based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. In the end, several samples were imaged to demonstrate the capability of our system.

  13. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    PubMed Central

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403
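
    The core triangulation step behind sheet-of-light scanning can be sketched as intersecting camera viewing rays, derived from the position readings of the detectors, with the known laser plane; the geometry and calibration values in the snippet below are illustrative placeholders, not parameters of the reported system.

      import numpy as np

      def triangulate(ray_dirs, plane_normal, plane_d):
          # Intersect camera viewing rays (camera at the origin) with the known
          # laser plane  n . X = d  to obtain 3D object points.
          t = plane_d / (ray_dirs @ plane_normal)     # ray parameter per detector reading
          return ray_dirs * t[:, None]

      # illustrative geometry: laser plane x = 0.2 m, rays from a toy pinhole model
      dirs = np.array([[0.1, y, 1.0] for y in np.linspace(-0.2, 0.2, 5)])
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      print(triangulate(dirs, np.array([1.0, 0.0, 0.0]), 0.2))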

  14. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    PubMed

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  15. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects.

    PubMed

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-08-20

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms.

  16. A Comparative Analysis between Active and Passive Techniques for Underwater 3D Reconstruction of Close-Range Objects

    PubMed Central

    Bianco, Gianfranco; Gallo, Alessandro; Bruno, Fabio; Muzzupappa, Maurizio

    2013-01-01

    In some application fields, such as underwater archaeology or marine biology, there is the need to collect three-dimensional, close-range data from objects that cannot be removed from their site. In particular, 3D imaging techniques are widely employed for close-range acquisitions in underwater environment. In this work we have compared in water two 3D imaging techniques based on active and passive approaches, respectively, and whole-field acquisition. The comparison is performed under poor visibility conditions, produced in the laboratory by suspending different quantities of clay in a water tank. For a fair comparison, a stereo configuration has been adopted for both the techniques, using the same setup, working distance, calibration, and objects. At the moment, the proposed setup is not suitable for real world applications, but it allowed us to conduct a preliminary analysis on the performances of the two techniques and to understand their capability to acquire 3D points in presence of turbidity. The performances have been evaluated in terms of accuracy and density of the acquired 3D points. Our results can be used as a reference for further comparisons in the analysis of other 3D techniques and algorithms. PMID:23966193

  17. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  18. Eccentricity in Images of Circular and Spherical Targets and its Impact to 3D Object Reconstruction

    NASA Astrophysics Data System (ADS)

    Luhmann, T.

    2014-06-01

    This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed, and must be compensated for in high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014), extended by new experimental investigations on the effect of length measurement errors.

  19. Evaluation of iterative sparse object reconstruction from few projections for 3-D rotational coronary angiography.

    PubMed

    Hansis, Eberhard; Schäfer, Dirk; Dössel, Olaf; Grass, Michael

    2008-11-01

    A 3-D reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular disease, compared to 2-D X-ray angiograms. Besides improved roadmapping, quantitative vessel analysis is possible. Due to the heart's motion, rotational coronary angiography typically provides only 5-10 projections for the reconstruction of each cardiac phase, which leads to a strongly undersampled reconstruction problem. Such an ill-posed problem can be approached with regularized iterative methods. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, the minimization of the L1 norm of the reconstructed image, favoring spatially sparse images, is a suitable regularization. Additional problems are overlaid background structures and projection truncation, which can be alleviated by background reduction using a morphological top-hat filter. This paper quantitatively evaluates image reconstruction based on these ideas on software phantom data, in terms of reconstructed absorption coefficients and vessel radii. Results for different algorithms and different input data sets are compared. First results for electrocardiogram-gated reconstruction from clinical catheter-based rotational X-ray coronary angiography are presented. Excellent 3-D image quality can be achieved. PMID:18955171
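
    A minimal sketch of L1-regularised reconstruction by iterative soft-thresholding is shown below; the matrix A merely stands in for the projection operator and b for the (background-reduced) projection data, and the gated, truncation-corrected pipeline evaluated in the paper is considerably more elaborate.

      import numpy as np

      def ista(A, b, lam, n_iter=200):
          # Iterative soft-thresholding for  min_x 0.5*||A x - b||^2 + lam*||x||_1,
          # i.e. least-squares data fidelity plus a sparsity-promoting L1 penalty.
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
          for _ in range(n_iter):
              z = x - step * (A.T @ (A @ x - b))        # gradient step on the data term
              x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
          return x

      # toy test: recover a sparse vector from a few random projections
      rng = np.random.default_rng(2)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100)
      x_true[[3, 50, 97]] = [1.0, -2.0, 0.5]
      print(ista(A, A @ x_true, lam=0.05)[[3, 50, 97]])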

  20. Tailoring bulk mechanical properties of 3D printed objects of polylactic acid varying internal micro-architecture

    NASA Astrophysics Data System (ADS)

    Malinauskas, Mangirdas; Skliutas, Edvinas; Jonušauskas, Linas; Mizeras, Deividas; Šešok, Andžela; Piskarskas, Algis

    2015-05-01

    Herein we present 3D Printing (3DP) fabrication of structures having internal microarchitecture and characterization of their mechanical properties. Depending on the material, geometry and fill factor, the manufactured objects' mechanical performance can be tailored from "hard" to "soft." In this work we employ a low-cost fused filament fabrication 3D printer enabling point-by-point structuring of poly(lactic acid) (PLA) with ~400 µm feature spatial resolution. The chosen architectures are defined as woodpiles (BCC, FCC and 60 deg rotating). The period is chosen to be 1200 µm, corresponding to 800 µm pores. The produced objects' structural quality is characterized using a scanning electron microscope, and their mechanical properties such as flexural modulus, elastic modulus and stiffness are measured experimentally using a universal TIRAtest2300 machine. Within the limitations of the study carried out, we show that the mechanical properties of 3D printed objects can be tuned at least 3 times by only changing the woodpile geometry arrangement, yet keeping the same filling factor and periodicity of the logs. Additionally, we demonstrate custom 3D printed µ-fluidic elements which can serve as cheap, biocompatible and environmentally biodegradable platforms for integrated Lab-On-Chip (LOC) devices.

  1. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately convey depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used
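
    The co-registration step described here, estimating a transformation from manually collected control points and re-projecting one image onto the other, can be sketched as a least-squares affine fit; the snippet below is such a sketch and is not the tool described in the abstract.

      import numpy as np

      def fit_affine(src_pts, dst_pts):
          # Least-squares 2D affine transform mapping control points measured in
          # one frame onto the matching points in the other frame.
          A = np.hstack([src_pts, np.ones((len(src_pts), 1))])   # rows are [x, y, 1]
          params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
          return params                                          # 3 x 2 parameter matrix

      def apply_affine(pts, params):
          return np.hstack([pts, np.ones((len(pts), 1))]) @ params

      # toy control points related by a small rotation plus a shift
      src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      theta = np.deg2rad(5.0)
      R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
      dst = src @ R.T + np.array([2.0, -1.0])
      print(apply_affine(src, fit_affine(src, dst)) - dst)       # residuals ~ 0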

  2. Saccade latency reveals episodic representation of object color.

    PubMed

    Gordon, Robert D

    2014-08-01

    While previous studies suggest that identity, but not color, plays a role in episodic object representation, such studies have typically used tasks in which only identity is relevant, raising the possibility that the results reflect task demands, rather than the general principles that underlie object representation. In the present study, participants viewed a preview display containing one (Experiments 1 and 2) or two (Experiment 3) letters, then viewed a target display containing a single letter, in either the same or a different location. Participants executed an immediate saccade to fixate the target; saccade latency served as the dependent variable. In all experiments, saccade latencies were longer to fixate a target appearing in its previewed location, consistent with a bias to attend to new objects rather than to objects for which episodic representations are being maintained in visual working memory. The results of Experiment 3 further demonstrate, however, that changing target color eliminates these latency differences. The results suggest that color and identity are part of episodic representation even when not task relevant and that examining biases in saccade execution may be a useful approach to studying episodic representation. PMID:24820158

  3. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]). PMID:26440264

  4. Past experience influences object representation in working memory.

    PubMed

    Wagar, Brandon M; Dixon, Mike J

    2005-04-01

    The nature of object representation in working memory is vital to establishing the capacity of working memory, which in turn shapes the limits of visual cognition and awareness. Although current theories discuss whether representations in working memory are feature-based or object-based, no theory has considered the role of past experience. However, work with humans and non-human primates suggests that once participants learn which features are important for category membership, these diagnostic features become more salient than non-diagnostic features in long-term memory and object recognition. Critically, the brain areas involved in this diagnosticity effect are also recruited during working memory tasks. We report two experiments testing whether a diagnosticity effect exists in working memory; and whether it is present when visual information is encoded into working memory, or if it is the result of maintenance within working memory. Results showed a diagnosticity effect which was present at encoding. Maintenance did not influence the nature of object representation in working memory. These findings show that the meaning we glean from our past experience has a profound influence on the nature of object representation in working memory.

  5. Integrated contextual representation for objects' identities and their locations.

    PubMed

    Gronau, Nurit; Neta, Maital; Bar, Moshe

    2008-03-01

    Visual context plays a prominent role in everyday perception. Contextual information can facilitate recognition of objects within scenes by providing predictions about objects that are most likely to appear in a specific setting, along with the locations that are most likely to contain objects in the scene. Is such identity-related ("semantic") and location-related ("spatial") contextual knowledge represented separately or jointly as a bound representation? We conducted a functional magnetic resonance imaging (fMRI) priming experiment whereby semantic and spatial contextual relations between prime and target object pictures were independently manipulated. This method allowed us to determine whether the two contextual factors affect object recognition with or without interacting, supporting a unified versus independent representations, respectively. Results revealed a Semantic x Spatial interaction in reaction times for target object recognition. Namely, significant semantic priming was obtained when targets were positioned in expected (congruent), but not in unexpected (incongruent), locations. fMRI results showed corresponding interactive effects in brain regions associated with semantic processing (inferior prefrontal cortex), visual contextual processing (parahippocampal cortex), and object-related processing (lateral occipital complex). In addition, activation in fronto-parietal areas suggests that attention and memory-related processes might also contribute to the contextual effects observed. These findings indicate that object recognition benefits from associative representations that integrate information about objects' identities and their locations, and directly modulate activation in object-processing cortical regions. Such context frames are useful in maintaining a coherent and meaningful representation of the visual world, and in providing a platform from which predictions can be generated to facilitate perception and action.

  6. On the Relations between Action Planning, Object Identification, and Motor Representations of Observed Actions and Objects

    ERIC Educational Resources Information Center

    Vainio, Lari; Symes, Ed; Ellis, Rob; Tucker, Mike; Ottoboni, Giovanni

    2008-01-01

    Recent evidence suggests that viewing a static prime object (a hand grasp), can activate action representations that affect the subsequent identification of graspable target objects. The present study explored whether stronger effects on target object identification would occur when the prime object (a hand grasp) was made more action-rich and…

  7. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  8. Aversive learning modulates cortical representations of object categories.

    PubMed

    Dunsmoor, Joseph E; Kragel, Philip A; Martin, Alex; LaBar, Kevin S

    2014-11-01

    Experimental studies of conditioned learning reveal activity changes in the amygdala and unimodal sensory cortex underlying fear acquisition to simple stimuli. However, real-world fears typically involve complex stimuli represented at the category level. A consequence of category-level representations of threat is that aversive experiences with particular category members may lead one to infer that related exemplars likewise pose a threat, despite variations in physical form. Here, we examined the effect of category-level representations of threat on human brain activation using 2 superordinate categories (animals and tools) as conditioned stimuli. Hemodynamic activity in the amygdala and category-selective cortex was modulated by the reinforcement contingency, leading to widespread fear of different exemplars from the reinforced category. Multivariate representational similarity analyses revealed that activity patterns in the amygdala and object-selective cortex were more similar among exemplars from the threat versus safe category. Learning to fear animate objects was additionally characterized by enhanced functional coupling between the amygdala and fusiform gyrus. Finally, hippocampal activity co-varied with object typicality and amygdala activation early during training. These findings provide novel evidence that aversive learning can modulate category-level representations of object concepts, thereby enabling individuals to express fear to a range of related stimuli.

  9. Aversive Learning Modulates Cortical Representations of Object Categories

    PubMed Central

    Dunsmoor, Joseph E.; Kragel, Philip A.; Martin, Alex; LaBar, Kevin S.

    2014-01-01

    Experimental studies of conditioned learning reveal activity changes in the amygdala and unimodal sensory cortex underlying fear acquisition to simple stimuli. However, real-world fears typically involve complex stimuli represented at the category level. A consequence of category-level representations of threat is that aversive experiences with particular category members may lead one to infer that related exemplars likewise pose a threat, despite variations in physical form. Here, we examined the effect of category-level representations of threat on human brain activation using 2 superordinate categories (animals and tools) as conditioned stimuli. Hemodynamic activity in the amygdala and category-selective cortex was modulated by the reinforcement contingency, leading to widespread fear of different exemplars from the reinforced category. Multivariate representational similarity analyses revealed that activity patterns in the amygdala and object-selective cortex were more similar among exemplars from the threat versus safe category. Learning to fear animate objects was additionally characterized by enhanced functional coupling between the amygdala and fusiform gyrus. Finally, hippocampal activity co-varied with object typicality and amygdala activation early during training. These findings provide novel evidence that aversive learning can modulate category-level representations of object concepts, thereby enabling individuals to express fear to a range of related stimuli. PMID:23709642

  10. Constructing Mental Representations of Complex Three-Dimensional Objects.

    ERIC Educational Resources Information Center

    Aust, Ronald

    This exploratory study investigated whether there are differences between males and females in the strategies used to construct mental representations from three-dimensional objects in a dimensional travel display. A Silicon Graphics IRIS computer was used to create the travel displays and mathematical models were created for each of the objects…

  11. Representation of protein 3D structures in spherical (ρ, ϕ, θ) coordinates and two of its potential applications.

    PubMed

    Reyes, Vicente M

    2011-09-01

    Three-dimensional objects can be represented using cartesian, spherical or cylindrical coordinate systems, among many others. Currently all protein 3D structures in the PDB are in cartesian coordinates. We wanted to explore the possibility that protein 3D structures, especially the globular type (spheroproteins), when represented in spherical coordinates, might find useful novel applications. A Fortran program was written to transform protein 3D structure files in cartesian coordinates (x,y,z) to spherical coordinates (ρ, ϕ, θ), with the centroid of the protein molecule as origin. We present here two applications, namely, (1) separation of the protein outer layer (OL) from the inner core (IC); and (2) identifying protrusions and invaginations on the protein surface. In the first application, ϕ and θ were partitioned into suitable intervals and the point with maximum ρ in each such 'ϕ-θ bin' was determined. A suitable cutoff value for ρ is adopted, and for each ϕ-θ bin, all points with ρ values less than the cutoff are considered part of the IC, and those with ρ values equal to or greater than the cutoff are considered part of the OL. We show that this separation procedure is successful as it gives rise to an OL that is significantly more enriched in hydrophilic amino acid residues, and an IC that is significantly more enriched in hydrophobic amino acid residues, as expected. In the second application, the points with maximum ρ in each ϕ-θ bin are sequestered and their frequency distribution constructed (i.e., maximum ρ's sorted from lowest to highest, collected into 1.50 Å intervals, and the frequency in each interval plotted). We show in such plots that invaginations on the protein surface give rise to subpeaks or shoulders on the lagging side of the main peak, while protrusions give rise to similar subpeaks or shoulders, but on the leading side of the main peak. We used the dataset of Laskowski et al. (1996) to demonstrate both applications.
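
    The coordinate transformation and the outer-layer/inner-core separation described here are straightforward to restate in code. The sketch below does so in NumPy (the original tool is a Fortran program); the binning resolution and the cutoff rule are illustrative choices, not the values used in the paper.

      import numpy as np

      def to_spherical(xyz):
          # Cartesian -> spherical (rho, phi, theta) with the molecular centroid as origin.
          p = xyz - xyz.mean(axis=0)
          rho = np.linalg.norm(p, axis=1)
          phi = np.arctan2(p[:, 1], p[:, 0])          # azimuth, -pi .. pi
          theta = np.arccos(p[:, 2] / rho)            # polar angle, 0 .. pi
          return rho, phi, theta

      def outer_layer_mask(rho, phi, theta, n_bins=36, cutoff_frac=0.8):
          # Flag points in the outer layer: within each phi-theta bin, keep points
          # whose rho reaches a cutoff (here a fraction of the bin maximum).
          phi_idx = np.digitize(phi, np.linspace(-np.pi, np.pi, n_bins))
          th_idx = np.digitize(theta, np.linspace(0.0, np.pi, n_bins))
          outer = np.zeros(rho.shape, dtype=bool)
          for key in set(zip(phi_idx, th_idx)):
              sel = (phi_idx == key[0]) & (th_idx == key[1])
              outer[sel] = rho[sel] >= cutoff_frac * rho[sel].max()
          return outer

      rng = np.random.default_rng(3)                  # toy "atom" coordinates
      coords = rng.standard_normal((500, 3)) * 10.0
      print(outer_layer_mask(*to_spherical(coords)).sum(), "of 500 points in the outer layer")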

  12. Representation of protein 3D structures in spherical (ρ, ϕ, θ) coordinates and two of its potential applications.

    PubMed

    Reyes, Vicente M

    2011-09-01

    Three-dimensional objects can be represented using cartesian, spherical or cylindrical coordinate systems, among many others. Currently all protein 3D structures in the PDB are in cartesian coordinates. We wanted to explore the possibility that protein 3D structures, especially the globular type (spheroproteins), when represented in spherical coordinates, might find useful novel applications. A Fortran program was written to transform protein 3D structure files in cartesian coordinates (x,y,z) to spherical coordinates (ρ, ϕ, θ), with the centroid of the protein molecule as origin. We present here two applications, namely, (1) separation of the protein outer layer (OL) from the inner core (IC); and (2) identifying protrusions and invaginations on the protein surface. In the first application, ϕ and θ were partitioned into suitable intervals and the point with maximum ρ in each such 'ϕ-θ bin' was determined. A suitable cutoff value for ρ is adopted, and for each ϕ-θ bin, all points with ρ values less than the cutoff are considered part of the IC, and those with ρ values equal to or greater than the cutoff are considered part of the OL. We show that this separation procedure is successful as it gives rise to an OL that is significantly more enriched in hydrophilic amino acid residues, and an IC that is significantly more enriched in hydrophobic amino acid residues, as expected. In the second application, the points with maximum ρ in each ϕ-θ bin are sequestered and their frequency distribution constructed (i.e., maximum ρ's sorted from lowest to highest, collected into 1.50 Å intervals, and the frequency in each interval plotted). We show in such plots that invaginations on the protein surface give rise to subpeaks or shoulders on the lagging side of the main peak, while protrusions give rise to similar subpeaks or shoulders, but on the leading side of the main peak. We used the dataset of Laskowski et al. (1996) to demonstrate both applications. PMID

  13. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  14. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths or terahertz (THz) band have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, aerosols, imaging through walls as in hostage situations, and also in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detector (GDD), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images since it can allow the isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that the THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown here to be capable of imaging objects from distances allowing standoff detection of suspicious objects and humans from large distances.

  15. Microwave and camera sensor fusion for the shape extraction of metallic 3D space objects

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1989-01-01

    The vacuum of space presents special problems for optical image sensors. Metallic objects in this environment can produce intense specular reflections and deep shadows. By combining the polarized radar cross section (RCS) with an incomplete camera image, it has become possible to better determine the shape of some simple three-dimensional objects. The radar data are used in an iterative procedure that generates successive approximations to the target shape by minimizing the error between computed scattering cross sections and the observed radar returns. Favorable results have been obtained for simulations and experiments reconstructing plates, ellipsoids, and arbitrary surfaces.

  16. Automatic moving object extraction toward compact video representation

    NASA Astrophysics Data System (ADS)

    Fan, Jianping; Fujita, Gen; Furuie, Makoto; Onoye, Takao; Shirakawa, Isao; Wu, Lide

    2000-02-01

    An automatic object-oriented video segmentation and representation algorithm is proposed, in which the local variance contrast and the frame-difference contrast are jointly exploited for meaningful moving object extraction, because these two visual features efficiently indicate the spatial homogeneity of the gray levels and the temporal coherence of the motion fields. The 2D entropic thresholding technique and the watershed transformation method are further developed to determine the global feature thresholds adaptively according to the variation of the video components. The obtained video components are first represented coarsely by a group of 4 × 4 blocks, and then the meaningful moving objects are generated by an iterative region-merging procedure according to a spatiotemporal similarity measure. A temporal tracking procedure is further proposed to obtain more semantic moving objects across frames. The proposed automatic moving object extraction algorithm can therefore detect the appearance of new objects as well as the disappearance of existing objects efficiently, because the correspondence of the video objects among frames is also established. Moreover, an object-oriented video representation and indexing approach is suggested, where both the operation of the camera (i.e., change of the viewpoint) and the birth or death of the individual objects are exploited to detect the breakpoints of the video data and to select the key frames adaptively.
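
    The two low-level features that drive the segmentation are simple to compute. The sketch below, with an illustrative window size and fixed thresholds, computes the local variance contrast and the frame-difference contrast for a pair of gray-level frames; the adaptive 2D entropic thresholding, watershed transformation, and region-merging stages of the paper are not reproduced here.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance(frame, size=5):
            """Local variance of gray levels: E[x^2] - E[x]^2 over a size x size window."""
            mean = uniform_filter(frame.astype(float), size)
            mean_sq = uniform_filter(frame.astype(float) ** 2, size)
            return mean_sq - mean ** 2

        def moving_object_mask(prev_frame, curr_frame, var_thresh=50.0, diff_thresh=15.0):
            """Coarse moving-object mask from spatial homogeneity and temporal change.
            The fixed thresholds are illustrative; the paper derives them adaptively
            with 2D entropic thresholding."""
            variance = local_variance(curr_frame)
            frame_diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
            return (variance > var_thresh) & (frame_diff > diff_thresh)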

  17. Evaluating the Effectiveness of Organic Chemistry Textbooks in Promoting Representational Fluency and Understanding of 2D-3D Diagrammatic Relationships

    ERIC Educational Resources Information Center

    Kumi, Bryna C.; Olimpo, Jeffrey T.; Bartlett, Felicia; Dixon, Bonnie L.

    2013-01-01

    The use of two-dimensional (2D) representations to communicate and reason about micromolecular phenomena is common practice in chemistry. While experts are adept at using such representations, research suggests that novices often exhibit great difficulty in understanding, manipulating, and translating between various representational forms. When…

  18. Teaching object concepts for XML-based representations.

    SciTech Connect

    Kelsey, R. L.

    2002-01-01

    Students learned about object-oriented design concepts and knowledge representation through the use of a set of toy blocks. The blocks represented a limited and focused domain of knowledge and one that was physical and tangible. The blocks helped the students to better visualize, communicate, and understand the domain of knowledge as well as how to perform object decomposition. The blocks were further abstracted to an engineering design kit for water park design. This helped the students to work on techniques for abstraction and conceptualization. It also led the project from tangible exercises into software and programming exercises. Students employed XML to create object-based knowledge representations and Java to use the represented knowledge. The students developed and implemented software allowing a lay user to design and create their own water slide and then to take a simulated ride on their slide.
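
    As a rough illustration of the kind of XML-based object representation described above, the sketch below builds and reads back a toy water-slide object with Python's standard xml.etree.ElementTree. The element and attribute names are hypothetical and not taken from the course materials.

        import xml.etree.ElementTree as ET

        # Build a toy object-based representation of a water slide (hypothetical schema)
        slide = ET.Element("object", attrib={"class": "WaterSlide", "name": "demo-slide"})
        ET.SubElement(slide, "attribute", name="height_m").text = "12.5"
        ET.SubElement(slide, "attribute", name="material").text = "fiberglass"
        segment = ET.SubElement(slide, "component", attrib={"class": "SlideSegment"})
        ET.SubElement(segment, "attribute", name="shape").text = "helix"

        xml_text = ET.tostring(slide, encoding="unicode")

        # Read the represented knowledge back, as a consuming program (e.g. a simulator) might
        root = ET.fromstring(xml_text)
        attrs = {a.get("name"): a.text for a in root.findall("attribute")}
        print(root.get("class"), attrs)   # WaterSlide {'height_m': '12.5', 'material': 'fiberglass'}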

  19. Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging.

    PubMed

    Wang, Yexin; Negahdaripour, Shahriar; Aykin, Murat D

    2016-08-20

    Establishing the projection model of an imaging system is critical for 3D reconstruction of object shapes from multiple 2D views. When deployed underwater, these systems are enclosed in waterproof housings with transparent glass ports that generate nonlinear refractions of optical rays at the interfaces, invalidating the commonly assumed single-viewpoint (SVP) model. In this paper, we propose a non-SVP ray tracing model for the calibration of a projector-camera system, employed for 3D reconstruction based on the structured light paradigm. The projector utilizes dot patterns, having established that their contrast loss is less severe than for traditional stripe patterns in highly turbid waters. Experimental results are presented to assess the achieved calibration accuracy. PMID:27556973
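
    The essence of a non-SVP model is that each optical ray is refracted explicitly at the housing interfaces rather than traced through a single center of projection. The sketch below shows the basic building block of such ray tracing, Snell refraction at a flat port in vector form; the interface geometry and refractive indices are illustrative, and the paper's full multi-interface calibration is not reproduced here.

        import numpy as np

        def refract(direction, normal, n1, n2):
            """Refract a unit ray direction at an interface with unit normal,
            going from refractive index n1 to n2 (Snell's law in vector form).
            Returns None for total internal reflection."""
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            cos_i = -np.dot(n, d)
            eta = n1 / n2
            k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
            if k < 0:
                return None                      # total internal reflection
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        # A camera ray leaving the air-filled housing through a flat port into water
        ray_in_air = np.array([0.2, 0.0, 1.0])
        port_normal = np.array([0.0, 0.0, -1.0])   # pointing back toward the camera
        ray_in_water = refract(ray_in_air, port_normal, n1=1.0, n2=1.33)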

  20. In-hand dexterous manipulation of piecewise-smooth 3-D objects

    SciTech Connect

    Rus, D.

    1999-04-01

    The author presents an algorithm called finger tracking for in-hand manipulation of three-dimensional objects with independent robot fingers. She describes and analyzes the differential control for finger tracking and extends it to on-line continuous control for a set of cooperating robot fingers. She shows experimental data from a simulation. Finally, she discusses global control issues for finger tracking, and computes lower bounds for reorientation by finger tracking. The algorithm is computationally efficient, exact, and takes into consideration the full dynamics of the system.

  1. Insertion of 3-D-primitives in mesh-based representations: towards compact models preserving the details.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu

    2010-07-01

    We propose an original hybrid modeling process for urban scenes that represents 3-D models as a combination of mesh-based surfaces and geometric 3-D primitives. Meshes describe details such as ornaments and statues, whereas 3-D primitives code for regular shapes such as walls and columns. Starting from a 3-D surface obtained by multiview stereo techniques, these primitives are detected and then inserted into the surface. This strategy allows the introduction of semantic knowledge, the simplification of the modeling, and even the correction of errors generated by the acquisition process. We design a hierarchical approach exploring different scales of an observed scene. Each level consists first in segmenting the surface using a multilabel energy model optimized by α-expansion, and then in fitting 3-D primitives such as planes, cylinders, or tori on the obtained partition where relevant. Experiments on real meshes, depth maps, and synthetic surfaces show good potential for the proposed approach.
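
    One ingredient of the primitive-insertion step is fitting a candidate primitive to a segmented patch and checking whether the fit is good enough to be "relevant". The sketch below shows the simplest case, a least-squares plane fit by SVD with an RMS residual; the paper's pipeline also fits cylinders and tori and embeds the choice in a multilabel energy, which is not reproduced here.

        import numpy as np

        def fit_plane(points):
            """Least-squares plane fit: returns (centroid, unit normal).
            points is an (N, 3) array of mesh vertices from one segment."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]                      # direction of smallest variance
            return centroid, normal

        def plane_residual(points, centroid, normal):
            """RMS distance of the points to the fitted plane, usable as a crude
            relevance test for replacing the mesh patch by a plane primitive."""
            d = (points - centroid) @ normal
            return np.sqrt(np.mean(d ** 2))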

  2. Object representations maintain attentional control settings across space and time.

    PubMed

    Schreij, Daniel; Olivers, Christian N L

    2009-10-01

    Previous research has revealed that we create and maintain mental representations for perceived objects on the basis of their spatiotemporal continuity. An important question is what type of information can be maintained within these so-called object files. We provide evidence that object files retain specific attentional control settings for items presented inside the object, even when it disappears from vision. The objects were entire visual search displays consisting of multiple items moving into and out of view. It was demonstrated that search was speeded when the search target position was repeated from trial to trial, but especially so when spatiotemporal continuity suggested that the entire display was the same object. We conclude that complete spatial attentional biases can be stored in an object file.

  3. Intrinsic spatial shift of local focus metric curves in digital inline holography for accurate 3D morphology measurement of irregular micro-objects

    NASA Astrophysics Data System (ADS)

    Wu, Yingchun; Wu, Xuecheng; Lebrun, Denis; Brunel, Marc; Coëtmellec, Sébastien; Lesouhaitier, Olivier; Chen, Jia; Gréhan, Gérard

    2016-09-01

    A theoretical model of the digital inline holography system reveals that the local focus metric curves (FMCs) of different parts of an irregular micro-object exhibit a spatial shift along the depth direction that results from the depth offset between those parts. Thus, the 3D morphology of an irregular micro-object can be accurately measured using the cross correlation of the local FMCs. This method retrieves the 3D depth information directly, avoiding the uncertainty inherited from determining the absolute depth position. Typical 3D morphology measurements, including the 3D boundary lines of tilted carbon fibers and irregular coal particles, and the 3D swimming gesture of a live Caenorhabditis elegans, are presented.
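
    The depth retrieval described above reduces to finding the lag that maximizes the cross correlation between the local focus metric curve of a reference region and that of another region of the object. A minimal sketch, assuming the FMCs are already sampled on the same uniform grid of reconstruction depths:

        import numpy as np

        def relative_depth_shift(fmc_ref, fmc_local, dz):
            """Estimate the depth offset between two local focus metric curves (FMCs)
            sampled every dz along the reconstruction depth axis, via the lag of the
            peak of their cross-correlation."""
            a = (fmc_ref - fmc_ref.mean()) / fmc_ref.std()
            b = (fmc_local - fmc_local.mean()) / fmc_local.std()
            xcorr = np.correlate(b, a, mode="full")
            lag = np.argmax(xcorr) - (len(a) - 1)
            return lag * dz      # positive: local part lies deeper than the reference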

  4. Using Morphlet-Based Image Representation for Object Detection

    NASA Astrophysics Data System (ADS)

    Gorbatsevich, V. S.; Vizilter, Yu. V.

    2016-06-01

    In this paper, we propose an original method for object detection based on a special tree-structured image representation - the trees of morphlets. The method provides robust detection of various types of objects in an image without employing a machine learning procedure. Along with bounding-box creation in the detection step, the method performs a pre-segmentation, which can be further used for recognition purposes. Another important feature of the proposed approach is that there is no need to use a running window or a feature pyramid in order to detect objects of different sizes.

  5. A supervised method for object-based 3D building change detection on aerial stereo images

    NASA Astrophysics Data System (ADS)

    Qin, R.; Gruen, A.

    2014-08-01

    There is a great demand for studying the changes of buildings over time. The current trend for building change detection combines the orthophoto and the DSM (Digital Surface Model). Pixel-based change detection methods are very sensitive to the quality of the images and DSMs, while object-based methods are more robust to these problems. In this paper, we propose a supervised method for building change detection. After a segment-based SVM (Support Vector Machine) classification with features extracted from the orthophoto and the DSM, we focus on detecting the building changes between the two periods by measuring their height and texture differences, as well as their shapes. A decision tree analysis is used to assess the probability of change for each building segment, and a traffic-light system is used to indicate the status "change", "non-change" and "uncertain change" for building segments. The proposed method is applied to scanned aerial photos of the city of Zurich from 2002 and 2007, and the results demonstrate that our method is able to achieve high detection accuracy.
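
    The final per-segment decision amounts to combining height, texture and shape evidence into a change probability and mapping it to a traffic-light status. The sketch below is a crude, hypothetical stand-in for the paper's decision-tree analysis; the weights and thresholds are illustrative only.

        def change_status(dsm_height_diff, texture_diff, shape_similarity,
                          change_thresh=0.7, uncertain_thresh=0.4):
            """Map simple per-segment evidence (each normalized to [0, 1]) to a
            traffic-light change status. Illustrative stand-in for the paper's
            decision-tree analysis; weights and thresholds are not from the paper."""
            # Combine normalized evidence into a pseudo-probability of change
            p_change = 0.5 * dsm_height_diff + 0.3 * texture_diff + 0.2 * (1.0 - shape_similarity)
            if p_change >= change_thresh:
                return "change"             # red
            if p_change >= uncertain_thresh:
                return "uncertain change"   # yellow
            return "non-change"             # green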

  6. Calculations of Arctic ozone chemistry using objectively analyzed data in a 3-D CTM

    NASA Technical Reports Server (NTRS)

    Kaminski, J. W.; Mcconnell, J. C.; Sandilands, J. W.

    1994-01-01

    A three-dimensional chemical transport model (CTM) (Kaminski, 1992) has been used to study the evolution of the Arctic ozone during the winter of 1992. The continuity equation has been solved using a spectral method with Rhomboidal 15 (R15) truncation and leap-frog time stepping. Six-hourly meteorological fields from the Canadian Meteorological Center global objective analysis routines run at T79 were degraded to the model resolution. In addition, they were interpolated to the model time grid and were used to drive the model from the surface to 10 mb. In the model, processing of Cl(x) occurred over Arctic latitudes but some of the initial products were still present by mid-January. Also, the large amounts of ClO formed in the model in early January were converted to ClNO3. The results suggest that the model resolution may be insufficient to resolve the details of the Arctic transport during this time period. In particular, the wind field does not move the ClO(x) 'cloud' to the south over Europe as seen in the MLS measurements.

  7. Aging preserves the ability to perceive 3D object shape from static but not deforming boundary contours.

    PubMed

    Norman, J Farley; Bartholomew, Ashley N; Burton, Cory L

    2008-09-01

    A single experiment investigated how younger (aged 18-32 years) and older (aged 62-82 years) observers perceive 3D object shape from deforming and static boundary contours. On any given trial, observers were shown two smoothly-curved objects, similar to water-smoothed granite rocks, and were required to judge whether they possessed the "same" or "different" shape. The objects presented during the "different" trials produced differently-shaped boundary contours. The objects presented during the "same" trials also produced different boundary contours, because one of the objects was always rotated in depth relative to the other by 5, 25, or 45 degrees. Each observer participated in 12 experimental conditions formed by the combination of 2 motion types (deforming vs. static boundary contours), 2 surface types (objects depicted as silhouettes or with texture and Lambertian shading), and 3 angular offsets (5, 25, and 45 degrees). When there was no motion (static silhouettes or stationary objects presented with shading and texture), the older observers performed as well as the younger observers. In the moving object conditions with shading and texture, the older observers' performance was facilitated by the motion, but the amount of this facilitation was reduced relative to that exhibited by the younger observers. In contrast, the older observers obtained no benefit in performance at all from the deforming (i.e., moving) silhouettes. The reduced ability of older observers to perceive 3D shape from motion is probably due to a low-level deterioration in the ability to detect and discriminate motion itself.

  8. Neural representations of novel objects associated with olfactory experience.

    PubMed

    Ghio, Marta; Schulze, Patrick; Suchan, Boris; Bellebaum, Christian

    2016-07-15

    Object conceptual knowledge comprises information related to several motor and sensory modalities (e.g. for tools, what they look like, how to manipulate them). Whether and to what extent conceptual object knowledge is represented in the same sensory and motor systems recruited during object-specific learning experience is still a controversial question. A direct approach to assess the experience-dependence of conceptual object representations is based on training with novel objects. The present study extended previous research, which focused mainly on the role of manipulation experience for tool-like stimuli, by considering sensory experience only. Specifically, we examined the impact of experience in the non-dominant olfactory modality on the neural representation of novel objects. Sixteen healthy participants visually explored a set of novel objects during the training phase while for each object an odor (e.g., peppermint) was presented (olfactory-visual training). As control conditions, a second set of objects was only visually explored (visual-only training), and a third set was not part of the training. In a post-training fMRI session, participants performed an old/new task with pictures of objects associated with olfactory-visual and visual-only training (old) and no-training objects (new). Although we did not find any evidence of activations in primary olfactory areas, the processing of olfactory-visual versus visual-only training objects elicited greater activation in the right anterior hippocampus, a region included in the extended olfactory network. This finding is discussed in terms of different functional roles of the hippocampus in olfactory processes. PMID:27083305

  9. Neural representations of novel objects associated with olfactory experience.

    PubMed

    Ghio, Marta; Schulze, Patrick; Suchan, Boris; Bellebaum, Christian

    2016-07-15

    Object conceptual knowledge comprises information related to several motor and sensory modalities (e.g. for tools, what they look like, how to manipulate them). Whether and to what extent conceptual object knowledge is represented in the same sensory and motor systems recruited during object-specific learning experience is still a controversial question. A direct approach to assess the experience-dependence of conceptual object representations is based on training with novel objects. The present study extended previous research, which focused mainly on the role of manipulation experience for tool-like stimuli, by considering sensory experience only. Specifically, we examined the impact of experience in the non-dominant olfactory modality on the neural representation of novel objects. Sixteen healthy participants visually explored a set of novel objects during the training phase while for each object an odor (e.g., peppermint) was presented (olfactory-visual training). As control conditions, a second set of objects was only visually explored (visual-only training), and a third set was not part of the training. In a post-training fMRI session, participants performed an old/new task with pictures of objects associated with olfactory-visual and visual-only training (old) and no-training objects (new). Although we did not find any evidence of activations in primary olfactory areas, the processing of olfactory-visual versus visual-only training objects elicited greater activation in the right anterior hippocampus, a region included in the extended olfactory network. This finding is discussed in terms of different functional roles of the hippocampus in olfactory processes.

  10. Objective Assessment of shoulder mobility with a new 3D gyroscope - a validation study

    PubMed Central

    2011-01-01

    Background Assessment of shoulder mobility is essential for clinical follow-up of shoulder treatment. Only a few highly sophisticated instruments for objective measurement of shoulder mobility are available, and the interobserver dependency of conventional goniometer measurements is high. In the 1990s an isokinetic measuring system from BIODEX Inc. was introduced, which is a very complex but valid instrument. Since 2008 a new user-friendly system called the DynaPort MiniMod TriGyro ShoulderTest-System (DP) has been available. The aim of this study is the validation of this measuring instrument against the BIODEX system. Methods The BIODEX is a computerized robotic dynamometer used for isokinetic testing and training of athletes. Because of its size the system needs to be installed in a separate room. The DP is a small, light-weight three-dimensional gyroscope that is fixed on the distal upper arm of the patient, recording abduction, flexion and rotation. For direct comparison we fixed the DP on the lever arm of the BIODEX. The accuracy of measurement was determined at different positions, angles and distances from the centre of rotation (COR), as well as at different velocities, over an angular range of 0°-180° in steps of 20°. All measurements were repeated 10 times. A difference between the two systems below 5° was defined as satisfactory accuracy. The statistical analysis was performed with a linear regression model. Results The evaluation shows very high measurement accuracy. The maximum average deviation is below 2.1°. For a small range of motion the DP slightly underestimates compared with the BIODEX, whereas for higher angles increasing positive differences are observed. Neither the distance to the COR nor the position of the DP on the lever arm has a significant influence. Concerning different motion speeds, a significant but not relevant influence is detected. Unfortunately, device-related effects are observed, leading to differences between repeated measurements with any two different

  11. Objective assessment, repeatability, and agreement of shoulder ROM with a 3D gyroscope

    PubMed Central

    2013-01-01

    Background Assessment of shoulder mobility is essential for diagnosis and clinical follow-up of shoulder diseases. Only a few highly sophisticated instruments for objective measurement of shoulder mobility are available. The recently introduced DynaPort MiniMod TriGyro ShoulderTest-System (DP) was validated earlier in laboratory trials. We aimed to assess the precision (repeatability) and agreement of this instrument in human subjects, as compared to the conventional goniometer. Methods The DP is a small, light-weight, three-dimensional gyroscope that can be fixed on the distal upper arm, recording shoulder abduction, flexion, and rotation. Twenty-one subjects (42 shoulders) were included for analysis. Two subsequent assessments of the same subject, with a 30-minute delay between tests of each shoulder, were performed with the DP in two directions (flexion and abduction) and simultaneously compared with the measurements of a conventional goniometer. All assessments were performed by one observer. Repeatability for each method was determined and compared as the statistical variance between two repeated measurements. Agreement was illustrated by Bland-Altman plots with 95% limits of agreement. Statistical analysis was performed with a linear mixed regression model. The variance of repeated measurements by the same method was also estimated and compared with the likelihood-ratio test. Results Evaluation of abduction showed significantly better repeatability for the DP compared to the conventional goniometer (error variance: DP = 0.89, goniometer = 8.58, p = 0.025). No significant differences were found for flexion (DP = 1.52, goniometer = 5.94, p = 0.09). For flexion, the mean difference was 0.27° with 95% limits of agreement ranging from −7.97° to 8.51°. For abduction, the mean difference was 1.19° with 95% limits of agreement ranging from −9.07° to 11.46°. Conclusion In summary, DP demonstrated a high precision even higher
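
    The agreement figures quoted above follow the standard Bland-Altman construction: the mean difference between methods plus or minus 1.96 standard deviations of the differences. A minimal sketch with illustrative data, not the study's measurements:

        import numpy as np

        def bland_altman_limits(method_a, method_b):
            """Mean difference and 95% limits of agreement between two methods
            measured on the same subjects (Bland-Altman)."""
            diff = np.asarray(method_a, float) - np.asarray(method_b, float)
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)
            return bias, bias - loa, bias + loa

        # Illustrative paired flexion angles (degrees): gyroscope vs. goniometer
        gyro = [172, 168, 175, 160, 158, 171]
        gonio = [170, 170, 172, 163, 155, 169]
        print(bland_altman_limits(gyro, gonio))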

  12. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire

    NASA Astrophysics Data System (ADS)

    Poon, Kelvin; Hamarneh, Ghassan; Abugharbieh, Rafeef

    2007-03-01

    Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic methods are typically preferred, their success is often hindered by poor image quality and significant variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier work, we introduced a highly automated technique for medical image segmentation in which a 3D extension of the traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based segmentation approach with new features designed primarily to enable the handling of complex object topologies that are common in biological structures. The point-ordering algorithm we proposed earlier, which automatically pairs up seedpoints in 3D, is improved in this work so that multiple sets of points can exist simultaneously. Point sets can now be automatically merged and split to accommodate the presence of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by augmenting the 'turtle algorithm', presented earlier, with a turtle-path pruning step. Tests on both synthetic and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted MRI scan, where an average task-time reduction of 84.7% was achieved when compared to a user performing 2D Livewire segmentation on every slice.
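
    The 2D Livewire at the core of the extended approach finds, between two user-selected seed points, the minimum-cost path over a local cost map (low cost along strong edges) using Dijkstra's algorithm. The sketch below shows that classical ingredient on a 4-connected grid; the cost map and its construction are left to the caller and are purely illustrative.

        import heapq
        import numpy as np

        def livewire_path(cost, seed, target):
            """Minimum-cost 4-connected path from seed to target on a 2D cost map
            (Dijkstra). cost[y, x] >= 0; seed and target are (y, x) tuples.
            Returns the path as a list of (y, x) pixels."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[seed] = 0.0
            heap = [(0.0, seed)]
            while heap:
                d, (y, x) = heapq.heappop(heap)
                if (y, x) == target:
                    break
                if d > dist[y, x]:
                    continue
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        nd = d + cost[ny, nx]
                        if nd < dist[ny, nx]:
                            dist[ny, nx] = nd
                            prev[(ny, nx)] = (y, x)
                            heapq.heappush(heap, (nd, (ny, nx)))
            path, node = [], target
            while node != seed:
                path.append(node)
                node = prev[node]
            return [seed] + path[::-1]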

  13. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes into consideration the spatial neighborhood information of the image patch and the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  14. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes into consideration the spatial neighborhood information of the image patch and the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness.

  15. Reconstructing representations of dynamic visual objects in early visual cortex

    PubMed Central

    Chong, Edmund; Familiar, Ariana M.; Shim, Won Mok

    2016-01-01

    As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations. PMID:26712004

  16. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking.

    PubMed

    Yang, Honghong; Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes into consideration the spatial neighborhood information of the image patch and the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  17. Characteristics of Haptic Peripersonal Spatial Representation of Object Relations

    PubMed Central

    2016-01-01

    Haptic perception of space is known to show characteristics that are different from actual space. The current study extends this line of research, investigating whether systematic deviations are also observed in the formation of haptic spatial representations of object-to-object relations. We conducted a haptic spatial reproduction task analogous to the parallelity task with spatial layouts. Three magnets were positioned to form the corners of an isosceles triangle, and the task of the participant was to reproduce the right-angle corner. We observed systematic deviations in the reproduction of the right-angle triangle. The systematic deviations were not observed when the task was conducted on the mid-sagittal plane. Furthermore, the magnitude of the deviation decreased when non-informative vision was introduced. These results suggest that there is a deformation in the spatial representation of object-to-object relations formed using haptics. However, as no systematic deviation was observed when the task was conducted on the mid-sagittal plane, we suggest that the perception of object-to-object relations uses a different egocentric reference frame from the perception of orientation. PMID:27462990

  18. Characteristics of Haptic Peripersonal Spatial Representation of Object Relations.

    PubMed

    Wako, Ryo; Ayabe-Kanamura, Saho

    2016-01-01

    Haptic perception of space is known to show characteristics that are different from actual space. The current study extends this line of research, investigating whether systematic deviations are also observed in the formation of haptic spatial representations of object-to-object relations. We conducted a haptic spatial reproduction task analogous to the parallelity task with spatial layouts. Three magnets were positioned to form the corners of an isosceles triangle, and the task of the participant was to reproduce the right-angle corner. We observed systematic deviations in the reproduction of the right-angle triangle. The systematic deviations were not observed when the task was conducted on the mid-sagittal plane. Furthermore, the magnitude of the deviation decreased when non-informative vision was introduced. These results suggest that there is a deformation in the spatial representation of object-to-object relations formed using haptics. However, as no systematic deviation was observed when the task was conducted on the mid-sagittal plane, we suggest that the perception of object-to-object relations uses a different egocentric reference frame from the perception of orientation. PMID:27462990

  19. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above

    PubMed Central

    van Doorn, Andrea J.; Wagemans, Johan

    2016-01-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329

  20. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects.

    PubMed

    Ye, Zhou; Nain, Amrinder S; Behkam, Bahareh

    2016-07-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.

  1. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects.

    PubMed

    Ye, Zhou; Nain, Amrinder S; Behkam, Bahareh

    2016-07-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features. PMID:27283144

  2. Parallel phase-shifting digital holography and its application to high-speed 3D imaging of dynamic object

    NASA Astrophysics Data System (ADS)

    Awatsuji, Yasuhiro; Xia, Peng; Wang, Yexin; Matoba, Osamu

    2016-03-01

    Digital holography is a technique for 3D measurement of objects. The technique uses an image sensor to record an interference fringe image containing the complex amplitude of the object, and numerically reconstructs that complex amplitude by computer. Parallel phase-shifting digital holography is capable of accurate 3D measurement of dynamic objects, because the technique can reconstruct the complex amplitude of the object, free of the undesired images, from a single hologram. The undesired images are the non-diffracted wave and the conjugate image that are associated with holography. In parallel phase-shifting digital holography, a hologram whose reference-wave phase is spatially and periodically shifted every other pixel is recorded, so that the complex amplitude of the object is obtained by single-shot exposure. The recorded hologram is decomposed into the multiple holograms required for phase-shifting digital holography, and the complex amplitude of the object, free from the undesired images, is reconstructed from these holograms. To validate parallel phase-shifting digital holography, a high-speed parallel phase-shifting digital holography system was constructed. The system consists of a Mach-Zehnder interferometer, a continuous-wave laser, and a high-speed polarization imaging camera. A phase motion picture of dynamic air flow sprayed from a nozzle was recorded by the system at 180,000 frames per second (FPS). A phase motion picture of air flow induced by discharge between two electrodes was also recorded at 1,000,000 FPS when high voltage was applied between the electrodes.
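
    A minimal sketch of the single-shot reconstruction step described above, assuming the common four-step variant in which the reference phase cycles through 0, π/2, π and 3π/2 over a 2 × 2 pixel pattern; nearest-neighbor upsampling stands in for the interpolation used in practice.

        import numpy as np

        def reconstruct_parallel_phase_shifting(hologram):
            """Recover the object's complex amplitude from a single parallel
            phase-shifting hologram whose reference phase follows a 2x2 periodic
            pattern (0, pi/2 / pi, 3*pi/2). Nearest-neighbor upsampling is used
            in place of proper interpolation, for brevity."""
            up = lambda sub: np.kron(sub, np.ones((2, 2)))   # back to full resolution
            i0   = up(hologram[0::2, 0::2])   # reference phase 0
            i90  = up(hologram[0::2, 1::2])   # reference phase pi/2
            i180 = up(hologram[1::2, 0::2])   # reference phase pi
            i270 = up(hologram[1::2, 1::2])   # reference phase 3*pi/2
            # Standard four-step phase-shifting combination
            return (i0 - i180) + 1j * (i90 - i270)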

  3. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  4. Spatial object representation and its use in planning eye movements.

    PubMed

    Beauvillain, Cécile; Vergilino-Perez, Dorine; Dükic, Tania

    2005-09-01

    The eye movements we make to look at objects require that the spatial information contained in the object's image on the retina be used to generate a motor command. This process is known as sensorimotor transformation and has been generally addressed using simple point targets. Here, we investigate the sensorimotor transformation involved in planning double saccade sequences directed at one or two objects. Using both visually guided saccades toward stationary objects and objects subjected to intrasaccadic displacements, and memory-guided saccades, we found that the coordinate transformations required to program the second saccade were different for saccades aimed at a new target object and saccades that scanned the same object. While saccades aimed at a new object were updated on the basis of the actual eye position, those that scanned the same object were performed with a fixed amplitude, irrespective of the actual eye position. Our findings demonstrate that different abstract representations of space are used in sensory-to-motor transformations, depending on what action is planned on the objects.

  5. Subjective and Objective Video Quality Assessment of 3D Synthesized Views With Texture/Depth Compression Distortion.

    PubMed

    Liu, Xiangkai; Zhang, Yun; Hu, Sudeng; Kwong, Sam; Kuo, C-C Jay; Peng, Qiang

    2015-12-01

    The quality assessment of synthesized video with texture/depth compression distortion is important for the design, optimization, and evaluation of multi-view video plus depth (MVD)-based 3D video systems. In this paper, both subjective and objective studies for synthesized view assessment are conducted. First, a synthesized video quality database with texture/depth compression distortion is presented, with subjective scores given by 56 subjects. The 140 videos are synthesized from ten MVD sequences with different texture/depth quantization combinations. Second, a full-reference objective video quality assessment (VQA) method is proposed that addresses the annoying temporal flicker distortion and the change of spatio-temporal activity in the synthesized video. The proposed VQA algorithm performs well when evaluated on the entire synthesized video quality database, and is particularly strong on the subsets that have significant temporal flicker distortion induced by depth compression and the view synthesis process. PMID:26292342

  6. Multi-frequency color-marked fringe projection profilometry for fast 3D shape measurement of complex objects.

    PubMed

    Jiang, Chao; Jia, Shuhai; Dong, Jun; Bao, Qingchen; Yang, Jia; Lian, Qin; Li, Dichen

    2015-09-21

    We propose a novel multi-frequency color-marked fringe projection profilometry approach to measure the 3D shape of objects with depth discontinuities. A digital micromirror device projector is used to project a color map consisting of a series of different-frequency color-marked fringe patterns onto the target object. We use a chromaticity curve to calculate the color change caused by the height of the object. The related algorithm to measure the height is also described in this paper. To improve the measurement accuracy, a chromaticity curve correction method is presented. This correction method greatly reduces the influence of color fluctuations and measurement error on the chromaticity curve and the calculation of the object height. The simulation and experimental results validate the utility of our method. Our method avoids the conventional phase shifting and unwrapping process, as well as the independent calculation of the object height required by existing techniques. Thus, it can be used to measure complex and dynamic objects with depth discontinuities. These advantages are particularly promising for industrial applications. PMID:26406621

  7. Colorful holographic display of 3D object based on scaled diffraction by using non-uniform fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Chang, Chenliang; Xia, Jun; Lei, Wei

    2015-03-01

    We propose a new method to calculate the color computer-generated hologram of a three-dimensional object for holographic display. The three-dimensional object is composed of several planes that are tilted with respect to the hologram. The diffraction from each tilted plane to the hologram plane is calculated based on coordinate rotation in the Fourier spectrum domain. We use the nonuniform fast Fourier transform (NUFFT) to calculate the nonuniformly sampled Fourier spectrum on the tilted plane after coordinate rotation. By using the NUFFT, the diffraction calculation from a tilted plane to the hologram plane with variable sampling rates can be achieved, which overcomes the sampling restriction of the FFT in the conventional angular-spectrum-based method. The holograms of the red, green and blue components of the polygon-based object are calculated separately using our NUFFT-based method. Then the color hologram is synthesized by placing the red, green and blue component holograms in sequence. The chromatic aberration caused by the wavelength difference can be solved effectively by restricting the sampling rate of the object in the calculation for each wavelength component. Computer simulation shows the feasibility of our method in calculating the color hologram of a polygon-based object. The 3D object can be displayed in color with adjustable size and no chromatic aberration in the holographic display system, which can be considered an important application in colorful holographic three-dimensional display.
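
    For reference, the conventional angular-spectrum propagation that the NUFFT approach generalizes can be written in a few lines; the sketch below is the standard FFT-based method for parallel planes with uniform sampling, not the tilted-plane NUFFT variant proposed in the paper.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a sampled complex field u0 by distance z between parallel
            planes using the standard FFT-based angular spectrum method (the
            conventional baseline mentioned in the abstract)."""
            n, m = u0.shape
            fx = np.fft.fftfreq(m, d=dx)
            fy = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(kz * z) * (arg > 0)          # evanescent components suppressed
            return np.fft.ifft2(np.fft.fft2(u0) * H)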

  8. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
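
    A rough sketch of the kind of depth-based deformation monitoring described above: compare the object's depth values between consecutive RGBD frames inside the segmented object mask and emit an event when the change exceeds a threshold. The function name and threshold are illustrative, not the paper's implementation.

        import numpy as np

        def deformation_event(depth_prev, depth_curr, object_mask, thresh_m=0.005):
            """Return True (an 'event' for the robot controller) when the mean
            absolute change of the object's surface depth between two consecutive
            RGBD frames exceeds thresh_m metres. object_mask is a boolean array
            marking the grasped object's pixels. Purely illustrative."""
            diff = np.abs(depth_curr - depth_prev)
            valid = object_mask & np.isfinite(diff) & (depth_prev > 0) & (depth_curr > 0)
            if not np.any(valid):
                return False
            return float(diff[valid].mean()) > thresh_m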

  9. Spun-wrapped aligned nanofiber (SWAN) lithography for fabrication of micro/nano-structures on 3D objects

    NASA Astrophysics Data System (ADS)

    Ye, Zhou; Nain, Amrinder S.; Behkam, Bahareh

    2016-06-01

    Fabrication of micro/nano-structures on irregularly shaped substrates and three-dimensional (3D) objects is of significant interest in diverse technological fields. However, it remains a formidable challenge thwarted by limited adaptability of the state-of-the-art nanolithography techniques for nanofabrication on non-planar surfaces. In this work, we introduce Spun-Wrapped Aligned Nanofiber (SWAN) lithography, a versatile, scalable, and cost-effective technique for fabrication of multiscale (nano to microscale) structures on 3D objects without restriction on substrate material and geometry. SWAN lithography combines precise deposition of polymeric nanofiber masks, in aligned single or multilayer configurations, with well-controlled solvent vapor treatment and etching processes to enable high throughput (>10^-7 m^2 s^-1) and large-area fabrication of sub-50 nm to several micron features with high pattern fidelity. Using this technique, we demonstrate whole-surface nanopatterning of bulk and thin film surfaces of cubes, cylinders, and hyperbola-shaped objects that would be difficult, if not impossible to achieve with existing methods. We demonstrate that the fabricated feature size (b) scales with the fiber mask diameter (D) as b^1.5 ∝ D. This scaling law is in excellent agreement with theoretical predictions using the Johnson, Kendall, and Roberts (JKR) contact theory, thus providing a rational design framework for fabrication of systems and devices that require precisely designed multiscale features.

  10. An object-based methodology for knowledge representation

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.

  11. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID

  12. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
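
    The one-line command convention is easy to picture: the first token of a string names the command and the remaining tokens are its data arguments. The sketch below is a small dispatcher in Python, purely for illustration; FastScript3D itself is implemented with Java 3D and its actual command set is not listed here, so the command names used are hypothetical.

        def make_dispatcher():
            """Map command names to handlers; each handler receives the remaining
            whitespace-separated tokens of the command string."""
            scene = []

            def create(args):   # e.g. "create box myBox 1 1 1"  (hypothetical command)
                scene.append({"type": args[0], "name": args[1],
                              "size": [float(v) for v in args[2:]]})

            def rotate(args):   # e.g. "rotate myBox y 45"        (hypothetical command)
                print(f"rotate {args[0]} about {args[1]} by {args[2]} degrees")

            handlers = {"create": create, "rotate": rotate}

            def run(command_string):
                name, *args = command_string.split()
                handlers[name](args)

            return run, scene

        run, scene = make_dispatcher()
        run("create box myBox 1 1 1")
        run("rotate myBox y 45")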

  13. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    DOE PAGES

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-05-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  14. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution.

    PubMed

    Meddens, Marjolein B M; Liu, Sheng; Finnegan, Patrick S; Edwards, Thayne L; James, Conrad D; Lidke, Keith A

    2016-06-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  15. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    PubMed Central

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet. PMID:27375939

  16. Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution

    SciTech Connect

    Meddens, Marjolein B. M.; Liu, Sheng; Finnegan, Patrick S.; Edwards, Thayne L.; James, Conrad D.; Lidke, Keith A.

    2016-01-01

    Here, we have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging via reduction of out-of-focus background light. Single molecule super-resolution is also improved by the decreased background resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities as compared to wide field, in the focused light-sheet.

  18. Object representations at multiple scales from digital elevation models

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2011-01-01

    In the last decade landform classification and mapping has developed as one of the most active areas of geomorphometry. However, translation from continuous models of elevation and its derivatives (slope, aspect, and curvatures) to landform divisions (landforms and landform elements) is filtered by two important concepts: scale and object ontology. Although acknowledged as being important, these two issues have received surprisingly little attention. This contribution provides an overview and prospects of object representation from DEMs as a function of scale. Relationships between object delineation and classification or regionalization are explored, in the context of differences between general and specific geomorphometry. A review of scale issues in geomorphometry—ranging from scale effects to scale optimization techniques—is followed by an analysis of pros and cons of using cells and objects in DEM analysis. Prospects for coupling multi-scale analysis and object delineation are then discussed. Within this context, we propose discrete geomorphometry as a possible approach between general and specific geomorphometry. Discrete geomorphometry would apply to and describe land-surface divisions defined solely by the criteria of homogeneity with respect to a given land-surface parameter or a combination of several parameters. Homogeneity, in its turn, should always be relative to scale. PMID:21760655
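
    As a concrete reminder of the "continuous models of elevation and its derivatives" that such object delineation starts from, the sketch below computes slope and aspect from a toy DEM by finite differences; the cell size, the synthetic surface, and the aspect convention (downslope azimuth, assuming the first array axis is northing and the second easting) are assumptions for illustration only.

      import numpy as np

      def slope_aspect(dem, cell_size=1.0):
          """Slope (degrees) and downslope aspect (degrees clockwise from north) from a 2-D DEM."""
          dz_dy, dz_dx = np.gradient(dem, cell_size)          # finite-difference derivatives
          slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
          aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0   # azimuth of the downslope vector
          return slope, aspect

      # Synthetic tilted plane: 10 m cells, rising toward the east at a 5 % grade.
      x = np.arange(50) * 10.0
      dem = np.tile(0.05 * x, (50, 1))
      slope, aspect = slope_aspect(dem, cell_size=10.0)
      print(round(float(slope.mean()), 2), round(float(aspect.mean()), 1))   # ~2.86 deg slope, ~270 deg (west-facing)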

  19. Object representations at multiple scales from digital elevation models.

    PubMed

    Drăguţ, Lucian; Eisank, Clemens

    2011-06-15

    In the last decade landform classification and mapping has developed as one of the most active areas of geomorphometry. However, translation from continuous models of elevation and its derivatives (slope, aspect, and curvatures) to landform divisions (landforms and landform elements) is filtered by two important concepts: scale and object ontology. Although acknowledged as being important, these two issues have received surprisingly little attention. This contribution provides an overview and prospects of object representation from DEMs as a function of scale. Relationships between object delineation and classification or regionalization are explored, in the context of differences between general and specific geomorphometry. A review of scale issues in geomorphometry - ranging from scale effects to scale optimization techniques - is followed by an analysis of pros and cons of using cells and objects in DEM analysis. Prospects for coupling multi-scale analysis and object delineation are then discussed. Within this context, we propose discrete geomorphometry as a possible approach between general and specific geomorphometry. Discrete geomorphometry would apply to and describe land-surface divisions defined solely by the criteria of homogeneity with respect to a given land-surface parameter or a combination of several parameters. Homogeneity, in its turn, should always be relative to scale. PMID:21760655

  20. Object representation and magnetic moments in thin alkali films

    NASA Astrophysics Data System (ADS)

    Garrett, Douglas C.

    2008-10-01

    This thesis is broken into two parts: a computer vision part and a solid-state physics part. In the computer vision part of the thesis (chapters 1 through 5), the concept of an architecture is discussed with a review of what is known about the brain's visual architecture as it applies to object representation. With this in mind, we review the two main types of architectures that are used in computer vision for object representation. A specific object representation is then implemented and optimized to solve a problem in object tracking. This representation is then used to derive the fiducial points of a face using two distinct methods: one using evolutionary algorithms and another using a Bayesian analysis of the feature responses drawn from a gallery of faces. The evolved fiducial representation is tested as a facial detection system. It is shown that the Bayesian analysis of facial images gives an entropy measure that can be used to further improve detection results in the facial detection system. In addition, two similarity metrics are explored in the context of facial detection. It is found that a normalized vector dot product substantially outperforms the Euclidean distance measure. The solid-state part of the thesis is composed of two self-contained chapters. An effort has been made to reduce the redundancies between the material, but some will necessarily remain (i.e., short descriptions of the experimental setup). Both chapters deal with the phenomenon of magnetism of atomic impurities in and on thin metal host films. The important difference between the chapters, besides the results, lies in the experimental technique used to measure the magnetism. In chapter 6, thin films of Pb are covered in situ with sub-monolayers of V, Mo and Co in the range between 0.01 and 1 monolayers. If the surface impurities are magnetic they will reduce the superconducting transition temperature of the Pb film. From the reduction of Tc the magnetic dephasing rate of the surface

  1. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
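
    The stochastic rendering described above avoids any depth sorting of the points; one minimal way to mimic the idea offline is to splat a random subset of the point cloud in each of several passes and average the resulting images, so that the per-pass sampling ratio plays the role of opacity. The sketch below is a schematic 2-D orthographic version under those assumptions, not the authors' renderer.

      import numpy as np

      def stochastic_splat(points, colors, passes=16, keep=0.1, size=(200, 200)):
          """Average `passes` renderings, each splatting a random `keep` fraction of points.

          points : (N, 3) array with x, y already normalized to [0, 1); no depth sorting is used
          colors : (N,) grayscale intensities in [0, 1]
          A larger `keep` makes the cloud look more opaque; a smaller one more see-through.
          """
          h, w = size
          accum = np.zeros(size)
          rng = np.random.default_rng(0)
          for _ in range(passes):
              image = np.ones(size)                        # white background for this pass
              idx = rng.random(len(points)) < keep         # stochastic subset of the points
              px = (points[idx, 0] * (w - 1)).astype(int)
              py = (points[idx, 1] * (h - 1)).astype(int)
              image[py, px] = colors[idx]
              accum += image
          return accum / passes

      pts = np.random.default_rng(1).random((50_000, 3))
      img = stochastic_splat(pts, colors=np.full(50_000, 0.2), keep=0.05)
      print(img.shape, round(float(img.mean()), 3))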

  2. The role of action representations in thematic object relations

    PubMed Central

    Tsagkaridis, Konstantinos; Watson, Christine E.; Jax, Steven A.; Buxbaum, Laurel J.

    2014-01-01

    A number of studies have explored the role of associative/event-based (thematic) and categorical (taxonomic) relations in the organization of object representations. Recent evidence suggests that thematic information may be particularly important in determining relationships between manipulable artifacts. However, although sensorimotor information is on many accounts an important component of manipulable artifact representations, little is known about the role that action may play during the processing of semantic relationships (particularly thematic relationships) between multiple objects. In this study, we assessed healthy and left hemisphere stroke participants to explore three questions relevant to object relationship processing. First, we assessed whether participants tended to favor thematic relations including action (Th+A, e.g., wine bottle—corkscrew), thematic relationships without action (Th-A, e.g., wine bottle—cheese), or taxonomic relationships (Tax, e.g., wine bottle—water bottle) when choosing between them in an association judgment task with manipulable artifacts. Second, we assessed whether the underlying constructs of event relatedness, action relatedness, and categorical relatedness determined the choices that participants made. Third, we assessed the hypothesis that degraded action knowledge and/or damage to temporo-parietal cortex, a region of the brain associated with the representation of action knowledge, would reduce the influence of action on the choice task. Experiment 1 showed that explicit ratings of event, action, and categorical relatedness were differentially predictive of healthy participants' choices, with action relatedness determining choices between Th+A and Th-A associations above and beyond event and categorical ratings. Experiment 2 focused more specifically on these Th+A vs. Th-A choices and demonstrated that participants with left temporo-parietal lesions, a brain region known to be involved in sensorimotor processing

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  4. [A case of representational dysgraphia and object representational disorder with unilateral spatial neglect].

    PubMed

    Takaiwa, Akiko; Tsuneto, Sumiko; Abe, Hirofumi; Terai, Satoshi; Tagawa, Koichi

    2015-03-01

    We describe the case of a 48-year-old left-handed woman with unilateral neglect from a brain infarction in the area of the right basal ganglia and temporo-parieto-occipital lobe. When a Kanji character was dictated to her, she wrote only the right side (tukuri) of the character. When copying a picture from the visual image of a left-right asymmetrical object, such as the side view of the dog, she drew the tail and a hind leg immediately but was unable to draw a picture of the dog from the left side. We asked her to imagine going around to the opposite side of the imaginary dog and to draw it from that perspective. She easily drew the left side first, resulting in a left-right inverted picture of what she had previously drawn. She then tried to slowly visualize the missing part of her imagery, and was able to draw only the right tip of the missing part. She could not compose a complete picture of the dog. These findings suggested that the impairment was in the imaging of the left side of a character or object and that this was a case of representational dysgraphia and object representational disorder with unilateral spatial neglect. PMID:25846448

  5. Three-dimensional object representation and invariant recognition using continuous distance transform neural networks.

    PubMed

    Tseng, Y H; Hwang, J N; Sheehan, F H

    1997-01-01

    3D object recognition under partial object viewing is a difficult pattern recognition task. In this paper, we introduce a neural-network solution that is robust to partial viewing of objects and noise corruption. This method directly utilizes the acquired 3D data and requires no feature extraction. The object is first parametrically represented by a continuous distance transform neural network (CDTNN) trained by the surface points of the exemplar object. The CDTNN maps any 3D coordinate into a value that corresponds to the distance between the point and the nearest surface point of the object. Therefore, a mismatch between the exemplar object and an unknown object can be easily computed. When deformed objects are encountered, this mismatch information can be backpropagated through the CDTNN to iteratively determine the deformation in terms of an affine transform. Application to 3D heart contour delineation and invariant recognition of 3D rigid-body objects is presented.

  6. Object-oriented philosophy in designing adaptive finite-element package for 3D elliptic differential equations

    NASA Astrophysics Data System (ADS)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is receiving more and more attention in scientific and engineering fields, its efficient implementation remains a subject of discussion because of its relatively complex procedures. In this paper, we propose a clear C++ framework implementation to show the powerful properties of object-oriented philosophy (OOP) in designing such a complex adaptive procedure. In terms of the modular features of an OOP language, the whole adaptive system is divided into several separate parts, such as mesh generation or refinement, the a posteriori error estimator, the adaptive strategy, and the final post-processing. After proper designs are performed locally on these separate modules, a connected framework for the adaptive procedure is finally formed. Based on the general elliptic differential equation, little additional effort is needed within the adaptive framework to run practical simulations. To show the favorable properties of OOP adaptive design, two numerical examples are tested. The first is a 3D direct-current resistivity problem, in which the power of the framework is demonstrated efficiently, as only minor additions are required. Then, in the second case, an induced polarization (IP) exploration problem, a new adaptive procedure is easily added, which adequately shows the strong extensibility and reusability of the OOP approach. Finally, we believe that, based on this modular adaptive framework implemented with OOP methodology, more advanced adaptive analysis systems will become available in the future.

  7. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  8. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    SciTech Connect

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based ''demons'' algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a

  9. Large-scale computer-generated absorption holograms of 3D objects: I. Theoretical background and visual concepts

    NASA Astrophysics Data System (ADS)

    Cameron, Colin D.; Payne, Douglas A.; Sheerin, David T.; Slinger, Christopher W.; Phillips, Nicholas J.; Dodd, Adrian K.

    1999-03-01

    Over many years, the subject of computer generation of holograms has been visited in various guises. Historically, the obvious restrictions imposed by computational power and computer generated hologram (CGH) fabrication techniques have placed limits on what can be taken seriously in terms of image complexity. Modern advances in computational hardware and electro-optic systems now permit both the calculation and the manufacture of CGHs of complex 3D objects which fill a significant volume of space. New methods permit the recording to be made within a reasonable timescale. In addition to advancing fixed CGH generation techniques, the motivation for the work reported here includes assessment of design algorithms, modulation strategies and image quality metrics. These results are of relevance for a novel electroholography system, currently under development at DERA Malvern. This paper describes a complete process of data generation, computation, data manipulation and recording leading to practical techniques for the creation of large-area CGHs. As a support to the advances in theoretical understanding and computational methods, we describe (in Part II) a new laser plotter technique that enables, in principle, an unlimited size of pixel array to be plotted efficiently with a rigorous estimate of duration of the plot run time. The results reported here are limited to 2048 × 2048 pixels. In this example, the novel switching techniques employed on the laser plotter permit the pixel array to be printed in approximately 1 hour. However, paths towards easily raising the pixel count and its associated printing rate are presented for both the computational engine and laser plotting processes.

  10. Discriminative object tracking via sparse representation and online dictionary learning.

    PubMed

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: the local sparse coding with online updated discriminative dictionary for tracking (SOD part), and the keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of the sparse representation and the online updated discriminative dictionary, the KP part is more robust than traditional methods in rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
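
    As a rough, generic illustration of the sparse-coding step in the SOD part (not the authors' dictionary-learning, updating, or keypoint scheme), the sketch below computes a sparse code for a single vectorized patch against a fixed random dictionary using iterative soft-thresholding (ISTA).

      import numpy as np

      def sparse_code_ista(x, D, lam=0.1, n_iter=200):
          """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over a by iterative soft-thresholding."""
          L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth term
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              a = a - D.T @ (D @ a - x) / L        # gradient step
              a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
          return a

      rng = np.random.default_rng(0)
      D = rng.standard_normal((64, 256))           # 8x8 patches, 256 atoms
      D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
      patch = 1.5 * D[:, 3] + 0.01 * rng.standard_normal(64)   # mostly one atom plus noise
      code = sparse_code_ista(patch, D)
      print(int(np.count_nonzero(np.abs(code) > 1e-3)), round(float(code[3]), 2))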

  11. Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers

    NASA Astrophysics Data System (ADS)

    Sandbach, S. D.; Lane, S. N.; Hardy, R. J.; Amsler, M. L.; Ashworth, P. J.; Best, J. L.; Nicholas, A. P.; Orfeo, O.; Parsons, D. R.; Reesink, A. J. H.; Szupiany, R. N.

    2012-12-01

    Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers.
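
    For context, roughness-length treatments of this kind are usually grounded in the logarithmic law of the wall, in which the unresolved ("unmeasured") bed topography enters through the roughness length z0. A generic form of that relation (the authors' specific wall treatment is not reproduced here) is u(z) = (u* / κ) · ln(z / z0), where u(z) is the velocity at height z above the bed, u* is the shear velocity, and κ ≈ 0.41 is the von Kármán constant; increasing z0 therefore increases the momentum extracted by the unresolved bedforms.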

  12. Declining object recognition performance in semantic dementia: A case for stored visual object representations.

    PubMed

    Tree, Jeremy J; Playfoot, David

    2015-01-01

    The role of the semantic system in recognizing objects is a matter of debate. Connectionist theories argue that it is impossible for a participant to determine that an object is familiar to them without recourse to a semantic hub; localist theories state that accessing a stored representation of the visual features of the object is sufficient for recognition. We examine this issue through the longitudinal study of two cases of semantic dementia, a neurodegenerative disorder characterized by a progressive degradation of the semantic system. The cases in this paper do not conform to the "common" pattern of object recognition performance in semantic dementia described by Rogers, Lambon Ralph, Hodges, and Patterson (2004, "Natural selection: The impact of semantic impairment on lexical and object decision," Cognitive Neuropsychology, 21, 331-352), and show no systematic relationship between severity of semantic impairment and success in object decision. We argue that these data are inconsistent with the connectionist position but can be easily reconciled with localist theories that propose stored structural descriptions of objects outside of the semantic system. PMID:27355607

  13. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    NASA Astrophysics Data System (ADS)

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method that is being developed by the ARESlab of Rome's La Sapienza University for the documentation and representation of archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in the Campania region along Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from the local necropolis, which contained more than 100,000 artifacts collected in the "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project concerns the archaeological area of the Insula Volusiana, located in the Forum Boarium close to the Sant'Omobono sacred area. During the studies, photographs were taken with Canon EOS 5D Mark II and Canon EOS 600D cameras. 3D models and meshes were created in Photoscan software. The TOF-CW Z+F IMAGER® 5006h laser scanner was used for dense data collection in the archaeological area of Rome and to make a metric comparison between range-based and image-based techniques. In these projects, IBM, as a low-cost technique, proved capable of high accuracy when planned correctly, and it also showed how it helps to obtain a record of complex strata and architectures compared to traditional manual documentation methods (e.g., two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, especially given the "destructive" character of an excavation. The presented methodology is suitable for 3D registration, and its accuracy also improved the scientific value.

  15. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants. PMID:20551339

  16. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  17. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  18. A fisher vector representation of GPR data for detecting buried objects

    NASA Astrophysics Data System (ADS)

    Karem, Andrew; Khalifa, Amine B.; Frigui, Hichem

    2016-05-01

    We present a new method, based on the Fisher Vector (FV), for detecting buried explosive objects using ground-penetrating radar (GPR) data. First, low-level dense SIFT features are extracted from a grid covering a region of interest (ROI). ROIs are identified as regions with high energy along the (down-track, depth) dimensions of the 3-D GPR cube, or with high energy along the (cross-track, depth) dimensions. Next, we model the training data (in the SIFT feature space) by a mixture of Gaussian components. Then, we construct FV descriptors based on the Fisher Kernel. The Fisher Kernel characterizes low-level features from an ROI by their deviation from a generative model. The deviation is the gradient of the ROI log-likelihood with respect to the generative model parameters. The vectorial representation of all the deviations is called the Fisher Vector. The FV is a generalization of the standard Bag of Words (BoW) method, which provides a framework to map a set of local descriptors to a global feature vector. It is more efficient to compute than the BoW since it relies on a significantly smaller codebook. In addition, mapping a GPR signature into one global feature vector using this technique makes it more efficient to classify using simple and fast linear classifiers such as Support Vector Machines. The proposed approach is applied to detect buried explosive objects using GPR data. The selected data were accumulated across multiple dates and multiple test sites by a vehicle-mounted mine detector (VMMD) using a GPR sensor. These data consist of a diverse set of conventional landmines and other buried explosive objects of varying shapes, metal content, and burial depths. The performance of the proposed approach is analyzed using receiver operating characteristic (ROC) curves and is compared to other state-of-the-art feature representation methods.
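
    As a generic numerical illustration of the Fisher Vector encoding described above (independent of the authors' GPR data and SIFT features), the sketch below computes the FV block of gradients with respect to the GMM means for a set of local descriptors, following the common formulation; all array shapes and the toy data are assumptions.

      import numpy as np

      def fisher_vector_means(X, weights, means, sigmas):
          """Fisher Vector block w.r.t. the GMM means (diagonal covariances).

          X       : (N, D) local descriptors from one ROI
          weights : (K,)   mixture weights
          means   : (K, D) component means
          sigmas  : (K, D) component standard deviations
          Returns a (K*D,) vector; stacking such blocks gives the ROI's global descriptor.
          """
          N, D = X.shape
          K = len(weights)
          log_p = np.empty((N, K))
          for k in range(K):                       # log of weighted Gaussian densities
              z = (X - means[k]) / sigmas[k]
              log_p[:, k] = np.log(weights[k]) - 0.5 * np.sum(z**2 + np.log(2*np.pi*sigmas[k]**2), axis=1)
          log_p -= log_p.max(axis=1, keepdims=True)
          gamma = np.exp(log_p)
          gamma /= gamma.sum(axis=1, keepdims=True)    # posterior (soft assignment) per descriptor
          fv = [(gamma[:, k, None] * (X - means[k]) / sigmas[k]).sum(0) / (N * np.sqrt(weights[k]))
                for k in range(K)]
          return np.concatenate(fv)

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 16))               # 500 descriptors of dimension 16
      fv = fisher_vector_means(X, np.full(4, 0.25), rng.standard_normal((4, 16)), np.ones((4, 16)))
      print(fv.shape)                                   # (64,) = K * D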

  19. 3D micro-XRF for cultural heritage objects: new analysis strategies for the investigation of the Dead Sea Scrolls.

    PubMed

    Mantouvalou, Ioanna; Wolff, Timo; Hahn, Oliver; Rabin, Ira; Lühl, Lars; Pagels, Marcel; Malzer, Wolfgang; Kanngiesser, Birgit

    2011-08-15

    A combination of 3D micro X-ray fluorescence spectroscopy (3D micro-XRF) and micro-XRF was utilized for the investigation of a small collection of highly heterogeneous, partly degraded Dead Sea Scroll parchment samples from known excavation sites. The quantitative combination of the two techniques proves to be suitable for the identification of reliable marker elements which may be used for classification and provenance studies. With 3D micro-XRF, the three-dimensional nature, i.e. the depth-resolved elemental composition as well as density variations, of the samples was investigated and bromine could be identified as a suitable marker element. It is shown through a comparison of quantitative and semiquantitative values for the bromine content derived using both techniques that, for elements which are homogeneously distributed in the sample matrix, quantification with micro-XRF using a one-layer model is feasible. Thus, the possibility for routine provenance studies using portable micro-XRF instrumentation on a vast amount of samples, even on site, is obtained through this work.

  20. Objective Assessment and Design Improvement of a Staring, Sparse Transducer Array by the Spatial Crosstalk Matrix for 3D Photoacoustic Tomography

    PubMed Central

    Kosik, Ivan; Raess, Avery

    2015-01-01

    Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization. PMID:25875177
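
    A minimal numerical sketch of three of the figures-of-merit named above (root mean square error, mean absolute error, and peak signal-to-noise ratio between a normalized crosstalk matrix and the identity; the 3D structural similarity index is omitted) might look as follows. The toy crosstalk matrix is an assumption, not data from the 129-element array.

      import numpy as np

      def crosstalk_figures_of_merit(C):
          """Compare a normalized spatial crosstalk matrix C (n x n) against the identity."""
          C = C / C.max()                          # normalize so an ideal system approaches the identity
          err = C - np.eye(C.shape[0])
          rmse = np.sqrt(np.mean(err**2))
          mae = np.mean(np.abs(err))
          psnr = 20 * np.log10(1.0 / rmse)         # peak value of the reference (identity) is 1
          return rmse, mae, psnr

      # Toy 4x4 crosstalk matrix: strong diagonal plus mild off-diagonal leakage.
      C = np.eye(4) + 0.05 * np.random.default_rng(0).random((4, 4))
      print(["%.3f" % v for v in crosstalk_figures_of_merit(C)])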

  1. Modeling of 3-D Object Manipulation by Multi-Joint Robot Fingers under Non-Holonomic Constraints and Stable Blind Grasping

    NASA Astrophysics Data System (ADS)

    Arimoto, Suguru; Yoshida, Morio; Bae, Ji-Hun

    This paper derives a mathematical model that expresses the motion of a pair of multi-joint robot fingers with hemispherical rigid ends grasping and manipulating a 3-D rigid object with parallel flat surfaces. Rolling contacts arising between the finger-ends and the object surfaces are taken into consideration and modeled as Pfaffian constraints, from which constraint forces emerge tangentially to the object surfaces. Another noteworthy difference between modeling the motion of a 3-D object and that of a 2-D object is that the instantaneous axis of rotation of the object is fixed in the 2-D case but time-varying in the 3-D case. A further difficulty that has prevented the modeling of 3-D physical interactions between a pair of fingers and a rigid object lies in the treatment of spinning motion that may arise around the opposition axis running from the contact point between one finger-end and one side of the object to the other contact point. This paper shows that, once such spinning motion stops as the object mass center approaches a position just beneath the opposition axis, this cessation of spinning evokes a further non-holonomic constraint. Hence, the multi-body dynamics of the overall fingers-object system is subject to non-holonomic constraints concerning a 3-D orthogonal matrix expressing three mutually orthogonal unit vectors fixed to the object, together with an extra non-holonomic constraint that the instantaneous axis of rotation of the object is always orthogonal to the opposition axis. It is shown that Lagrange's equation of motion of the overall system can be derived without violating the causality that governs the non-holonomic constraints. This immediately suggests the possible construction of a numerical simulator of multi-body dynamics that can express the motion of the fingers and object as physically interactive with each other. By referring to the fact that humans grasp an object in the form of precision prehension dynamically and stably by using opposable force between the thumb and another
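
    For reference, the Pfaffian (velocity-level) constraints mentioned above take the generic form A(q) q̇ = 0, where q collects the finger joint angles and the object pose variables and A(q) encodes the rolling-contact conditions; such a constraint is non-holonomic precisely when it cannot be integrated to a relation among the coordinates q alone. This is the standard textbook form, not the authors' specific constraint matrix.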

  2. Object Representation in Infants' Coordination of Manipulative Force

    ERIC Educational Resources Information Center

    Mash, Clay

    2007-01-01

    This study examined infants' use of object knowledge for scaling the manipulative force of object-directed actions. Infants 9, 12, and 15 months of age were outfitted with motion-analysis sensors on their arms and then presented with stimulus objects to examine individually over a series of familiarization trials. Two stimulus objects were used in…

  3. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
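
    Step (1) of the pipeline, refocusing the light field at many depths, is commonly implemented by shifting each sub-aperture view in proportion to its angular offset and a focus parameter and then averaging the shifted views. The sketch below shows that generic shift-and-sum idea on a synthetic light field; it is not the Northrop Grumman or Raytrix processing code, and all sizes and the focus parameter are assumptions.

      import numpy as np

      def refocus(lightfield, alpha):
          """Shift-and-sum refocusing of a 4-D light field L[u, v, y, x] at slope `alpha`.

          Each (u, v) sub-aperture image is translated by alpha*(u - uc, v - vc) pixels
          (integer shifts for simplicity) and all views are averaged.
          """
          U, V, H, W = lightfield.shape
          uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
          out = np.zeros((H, W))
          for u in range(U):
              for v in range(V):
                  dy = int(round(alpha * (u - uc)))
                  dx = int(round(alpha * (v - vc)))
                  out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
          return out / (U * V)

      # Synthetic 5x5-view light field of a bright square whose apparent position
      # shifts with viewpoint, i.e. it lies off the nominal focal plane.
      L = np.zeros((5, 5, 64, 64))
      for u in range(5):
          for v in range(5):
              L[u, v, 30 + 4*(u - 2):40 + 4*(u - 2), 30 + 4*(v - 2):40 + 4*(v - 2)] = 1.0
      sharp = refocus(L, alpha=-4.0)      # shift that brings the square back into focus
      blurry = refocus(L, alpha=0.0)
      print(round(float(sharp.max()), 2), round(float(blurry.max()), 2))   # ~1.0 vs ~0.36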

  4. Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex

    PubMed Central

    Jeong, Su Keun

    2016-01-01

    The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. Although object representation has traditionally been

  5. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of the multispectral image data and data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies, which exist among the adjacent pixels, are intelligently incorporated into the decision making process. The unity relation was defined that must exist among the pixels of an object. Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within object pixel-feature gradient vector as a valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of features is evaluated.

  6. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an issue of education of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to that of the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  7. The "What" and "Where" of Object Representations in Infancy.

    ERIC Educational Resources Information Center

    Mareschal, Denis; Johnson, Mark H.

    2003-01-01

    Tested 4-month-olds' memory for surface feature and location information following brief occlusions. Found that when target objects were images of female faces or monochromatic asterisks, infants increased looking times following changes in identity or color but not changes in location or combinations of feature and location. When objects were…

  8. Matching categorical object representations in inferior temporal cortex of man and monkey.

    PubMed

    Kriegeskorte, Nikolaus; Mur, Marieke; Ruff, Douglas A; Kiani, Roozbeh; Bodurka, Jerzy; Esteky, Hossein; Tanaka, Keiji; Bandettini, Peter A

    2008-12-26

    Inferior temporal (IT) object representations have been intensively studied in monkeys and humans, but representations of the same particular objects have never been compared between the species. Moreover, IT's role in categorization is not well understood. Here, we presented monkeys and humans with the same images of real-world objects and measured the IT response pattern elicited by each image. In order to relate the representations between the species and to computational models, we compare response-pattern dissimilarity matrices. IT response patterns form category clusters, which match between man and monkey. The clusters correspond to animate and inanimate objects; within the animate objects, faces and bodies form subclusters. Within each category, IT distinguishes individual exemplars, and the within-category exemplar similarities also match between the species. Our findings suggest that primate IT across species may host a common code, which combines a categorical and a continuous representation of objects. PMID:19109916
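
    A generic sketch of the response-pattern dissimilarity analysis used here: build a representational dissimilarity matrix (one minus the correlation between the response patterns elicited by each pair of images) for each species, then compare the two matrices with a rank correlation. The random "response patterns" below merely stand in for measured IT data.

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      n_images, n_units = 92, 300

      # Stand-in response patterns (images x measurement channels) for the two species;
      # the second set is a noisy, re-mixed version of the first so the RDMs share structure.
      monkey = rng.standard_normal((n_images, n_units))
      human = (monkey
               + 0.1 * (monkey @ rng.standard_normal((n_units, n_units)))
               + rng.standard_normal((n_images, n_units)))

      # RDM as a condensed vector: (1 - Pearson correlation) for every pair of images.
      rdm_monkey = pdist(monkey, metric="correlation")
      rdm_human = pdist(human, metric="correlation")

      rho, p = spearmanr(rdm_monkey, rdm_human)
      print(f"RDM similarity (Spearman rho) = {rho:.2f}, p = {p:.1e}")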

  9. Coherent digital demodulation of single-camera N-projections for 3D-object shape measurement: co-phased profilometry.

    PubMed

    Servin, M; Garnica, G; Estrada, J C; Quiroga, A

    2013-10-21

    Fringe projection profilometry is a well-known technique to digitize 3-dimensional (3D) objects and is widely used in robotic vision and industrial inspection. Probably the single most important problem in single-camera, single-projection profilometry is the shadows and specular reflections generated by the 3D object under analysis. Here, a single camera along with N fringe projections is digitally coherently demodulated in a single step, solving the shadows and specular reflections problem. Co-phased profilometry coherently phase-demodulates a whole set of N fringe-pattern perspectives in a single demodulation and unwrapping process. The mathematical theory behind digitally co-phasing N fringe patterns is similar to that of co-phasing a segmented N-mirror telescope.
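
    The co-phased scheme above builds on classical N-step phase-shifting demodulation: given N fringe images with equally spaced phase shifts, the wrapped phase follows from the arctangent of the sine- and cosine-weighted sums of the images. The sketch below shows only that standard single-view building block, not the authors' co-phased multi-projection demodulation.

      import numpy as np

      def n_step_phase(images):
          """Wrapped phase from N fringe images with phase shifts 2*pi*n/N (n = 0..N-1)."""
          N = len(images)
          n = np.arange(N)
          s = np.tensordot(np.sin(2*np.pi*n/N), images, axes=(0, 0))
          c = np.tensordot(np.cos(2*np.pi*n/N), images, axes=(0, 0))
          return -np.arctan2(s, c)                 # wrapped phase in [-pi, pi]

      # Synthetic test: a known phase map modulated into N = 4 shifted fringe patterns.
      y, x = np.mgrid[0:64, 0:64]
      true_phase = 0.05 * x + 0.3 * np.sin(y / 10.0)
      imgs = np.stack([0.5 + 0.4*np.cos(true_phase + 2*np.pi*k/4) for k in range(4)])
      wrapped = n_step_phase(imgs)
      err = np.angle(np.exp(1j * (wrapped - true_phase)))   # compare modulo 2*pi
      print(round(float(np.abs(err).max()), 6))             # ~0, i.e. exact up to rounding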

  10. Optical display of magnified, real and orthoscopic 3-D object images by moving-direct-pixel-mapping in the scalable integral-imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Piao, Yongri; Kim, Eun-Soo

    2011-10-01

    In this paper, we propose a novel approach for the reconstruction of magnified, real and orthoscopic three-dimensional (3-D) object images by using the moving-direct-pixel-mapping (MDPM) method in the MALT (moving-array-lenslet-technique)-based scalable integral-imaging system. In the proposed system, multiple sets of elemental image arrays (EIAs) are captured with the MALT, and these picked-up EIAs are computationally transformed into depth-converted ones by using the proposed MDPM method. Then, these depth-converted EIAs are combined and interlaced together to form an enlarged EIA, from which magnified, real and orthoscopic 3-D object images can be optically displayed without any degradation of resolution. Good experimental results finally confirmed the feasibility of the proposed method.

  11. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  12. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  13. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547
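
    The graph-based search idea underlying the RAG-3D entries above can be illustrated with a generic subgraph-isomorphism test. The Python sketch below uses the networkx library with a made-up connectivity graph and query motif; it is an illustration of the approach, not the RAG-3D implementation.

        # Minimal sketch of graph-based substructure search: secondary-structure
        # elements become vertices, their connectivity edges, and a query motif is
        # matched as an induced subgraph of each catalogued structure. Illustrative only.
        import networkx as nx
        from networkx.algorithms import isomorphism

        # Hypothetical catalogued structure: 5 elements with their connectivity.
        catalogued = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 2), (4, 5)])

        # Query substructure: a three-element junction (a 3-cycle).
        query = nx.Graph([(10, 11), (11, 12), (12, 10)])

        matcher = isomorphism.GraphMatcher(catalogued, query)
        if matcher.subgraph_is_isomorphic():
            print("query motif found:", matcher.mapping)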

  14. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  15. Multidimensional representation of objects--The influence of task demands.

    PubMed

    Goldfarb, L; Sabah, K

    2016-04-01

    In our daily life, we often encounter situations in which different features of several multidimensional objects must be perceived simultaneously. There are two types of environments of this kind: environments with multidimensional objects that have unique feature associations, and environments with multidimensional objects that have mixed feature associations. Recently, we (Goldfarb & Treisman, 2013) described the association effect, suggesting that the latter type causes behavioral perception difficulties. In the present study, we investigated this effect further by examining whether the effect is determined via a feedforward visual path or via a high-order task demand component. In order to test this question, in Experiment 1 a set of multidimensional objects were presented while we manipulated the letter case of a target feature, thus creating a visually different but semantically equivalent object, in terms of its identity. Similarly, in Experiment 2 artificial groups with different physical properties were created according to the task demands. The results indicated that the association effect is determined by the task demands, which create the group of reference. The importance of high-order task demand components in the association effect is further discussed, as well as the possible role of the neural synchrony of object files in explaining this effect. PMID:26163190

  16. Is a pre-change object representation weakened under correct detection of a change?

    PubMed

    Yeh, Yei-Yu; Yang, Cheng-Ta

    2009-03-01

    We investigated whether a pre-change representation is inhibited or weakened under correct change detection. Two arrays of six objects were rapidly presented for change detection in three experiments. After detection, the perceptual identification of degraded stimuli was tested in Experiments 1 and 2. The weakening of a pre-change representation was not observed under correct detection. The repetition priming effect was observed for a pre-change object and the magnitude was equivalent to the effect for a post-change object. Under change blindness, repetition priming for a pre-change representation was observed when detection did not require report of location in Experiment 1 and was not observed when location was required to be reported in Experiment 2. The results of Experiment 3 showed that a pre-change representation was recognized at a higher rate under correct detection than under change blindness, reflecting a stronger rather than a weaker pre-change representation in the former context.

  17. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning.

  18. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. PMID:23212750

  19. Repetition Blindness Reveals Differences between the Representations of Manipulable and Nonmanipulable Objects

    ERIC Educational Resources Information Center

    Harris, Irina M.; Murray, Alexandra M.; Hayward, William G.; O'Callaghan, Claire; Andrews, Sally

    2012-01-01

    We used repetition blindness to investigate the nature of the representations underlying identification of manipulable objects. Observers named objects presented in rapid serial visual presentation streams containing either manipulable or nonmanipulable objects. In half the streams, 1 object was repeated. Overall accuracy was lower when streams…

  20. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  1. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
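
    The stereo-vision route mentioned above ultimately reduces to the textbook depth-from-disparity relation Z = fB/d for a parallel two-camera setup. The minimal Python sketch below applies that relation to an illustrative disparity map; the focal length, baseline and disparity values are assumptions, not taken from the presentation.

        # Minimal sketch: depth from a disparity map with two parallel CCD cameras.
        # Z = f * B / d for focal length f (pixels), baseline B (metres), disparity d (pixels).
        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Return a depth map in metres; zero-disparity pixels are marked invalid."""
            depth = np.full_like(disparity_px, np.nan, dtype=float)
            valid = disparity_px > 0
            depth[valid] = focal_px * baseline_m / disparity_px[valid]
            return depth

        disparity = np.array([[12.0, 8.0], [4.0, 0.0]])   # pixels, e.g. from block matching
        print(depth_from_disparity(disparity, focal_px=800.0, baseline_m=0.12))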

  2. 3D-Modeling of deformed halite hopper crystals: Object based image analysis and support vector machine, a first evaluation

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-05-01

    Halite hopper crystals are thought to develop by displacive growth in unconsolidated mud (Gornitz & Schreiber, 1984). The Alpine Haselgebirge, but also e.g. the salt deposits of the Rhine graben (mined at the beginning of the 20th century), comprise hopper crystals with shapes of cuboids, parallelepipeds and rhombohedrons (Görgey, 1912). They evidently deformed under oriented stress, which earlier work attempted to reconstruct with respect to the sedimentary layering (Leitner et al., 2013). In the present work, deformed halite hopper crystals embedded in mudrock were reconstructed automatically. Object based image analysis (OBIA) had previously been used successfully in remote sensing for 2D images; the present study represents the first time the method was used for the reconstruction of three-dimensional geological objects. First, a reference (gold standard) was created manually by redrawing the contours of the halite crystals on each HRXCT scanning slice. Then, for OBIA, the computer program eCognition was used and a rule set was developed for the automated reconstruction. The strength of OBIA was its ability to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. In a second step, all objects unsuitable for a structural deformation analysis (clusters, polyhalite-coated crystals and spherical halites) were dismissed using a support vector machine (SVM), which simultaneously reduced the number of halites drastically. Of 184 OBIA objects, 67 well-shaped ones remained, which comes close to the 52 manually pre-selected objects. To assess the accuracy of the automated reconstruction, the results before and after the SVM step were compared to the reference, i.e. the gold standard, and state-of-the-art per-scene statistics were extended to per-object statistics. Görgey R (1912) Zur Kenntnis der Kalisalzlager von Wittelsheim im Ober-Elsaß. Tschermaks Mineral Petrogr Mitt 31:339-468. Gornitz VM, Schreiber BC (1981) Displacive halite hoppers from the dead sea
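
    The SVM filtering step described above can be sketched generically with scikit-learn. The shape descriptors (volume, elongation, sphericity), labels and data in the snippet below are illustrative stand-ins rather than the authors' actual features or training set.

        # Minimal sketch: an SVM that keeps well-shaped halite candidates and dismisses
        # clusters, coated and spherical objects. Features and data are synthetic.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(60, 3))          # [volume, elongation, sphericity] per object
        y_train = rng.integers(0, 2, size=60)       # 1 = usable hopper crystal, 0 = dismiss

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_train, y_train)

        X_candidates = rng.normal(size=(184, 3))    # descriptors of the OBIA candidate objects
        keep = clf.predict(X_candidates) == 1
        print(f"{keep.sum()} of {len(keep)} candidates retained for deformation analysis")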

  3. A Double-Dissociation in Infants' Representations of Object Arrays

    ERIC Educational Resources Information Center

    Feigenson, L.

    2005-01-01

    Previous studies show that infants can compute either the total continuous extent (e.g. Clearfield, M.W., & Mix, K.S. (1999). Number versus contour length in infants' discrimination of small visual sets. Psychological Science, 10(5), 408-411; Feigenson, L., & Carey, S. (2003). Tracking individuals via object-files: evidence from infants' manual…

  4. Mechanisms Underlying the Emergence of Object Representations during Infancy

    ERIC Educational Resources Information Center

    Scott, Lisa S.

    2011-01-01

    The effects of individual versus category training, using behavioral indices of stimulus discrimination and neural ERPs indices of holistic processing, were examined in infants. Following pretraining assessments at 6 months, infants were sent home with training books of objects for 3 months. One group of infants was trained with six different…

  5. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world, so visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS software and its extensions for 3D modeling and visualization and to use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  6. 3D Multi-Object Segmentation of Cardiac MSCT Imaging by using a Multi-Agent Approach

    PubMed Central

    Fleureau, Julien; Garreau, Mireille; Boulmier, Dominique; Hernandez, Alfredo

    2007-01-01

    We propose a new technique for general purpose, semi-interactive and multi-object segmentation in N-dimensional images, applied to the extraction of cardiac structures in MultiSlice Computed Tomography (MSCT) imaging. The proposed approach makes use of a multi-agent scheme combined with a supervised classification methodology allowing the introduction of a priori information and presenting fast computing times. The multi-agent system is organised around a communicating agent which manages a population of situated agents which segment the image through cooperative and competitive interactions. The proposed technique has been tested on several patient data sets. Some typical results are finally presented and discussed. PMID:18003382

  7. An application of object-oriented knowledge representation to engineering expert systems

    NASA Technical Reports Server (NTRS)

    Logie, D. S.; Kamil, H.; Umaretiya, J. R.

    1990-01-01

    The paper describes an object-oriented knowledge representation and its application to engineering expert systems. The object-oriented approach promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects and organized by defining relationships between the objects. An Object Representation Language (ORL) was implemented as a tool for building and manipulating the object base. Rule-based knowledge representation is then used to simulate engineering design reasoning. Using a common object base, very large expert systems can be developed, comprised of small, individually processed, rule sets. The integration of these two schemes makes it easier to develop practical engineering expert systems. The general approach to applying this technology to the domain of the finite element analysis, design, and optimization of aerospace structures is discussed.
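
    The object-plus-rules pattern described above can be sketched generically. The snippet below is a plain Python illustration of objects encapsulating engineering data, relationships between them, and a small design rule applied against the object base; it does not reproduce the ORL syntax or the authors' rule sets, and all names and values are hypothetical.

        # Generic sketch of objects + relationships + a rule run against the object base.
        from dataclasses import dataclass, field

        @dataclass
        class Component:
            name: str
            stress: float            # computed stress, MPa (illustrative)
            allowable: float         # allowable stress, MPa (illustrative)
            children: list = field(default_factory=list)   # relationship: sub-components

        def rule_overstressed(component):
            """Simple design rule: flag any component whose stress exceeds its allowable."""
            if component.stress > component.allowable:
                return f"{component.name}: redesign required ({component.stress} > {component.allowable} MPa)"
            return None

        wing = Component("wing_spar", stress=310.0, allowable=275.0)
        rib = Component("rib_7", stress=120.0, allowable=275.0)
        wing.children.append(rib)

        for part in [wing, rib]:
            finding = rule_overstressed(part)
            if finding:
                print(finding)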

  8. Neuronal representation of occluded objects in the human brain.

    PubMed

    Olson, Ingrid R; Gatenby, J Christopher; Leung, Hoi Chung; Skudlarski, Pawel; Gore, John C

    2004-01-01

    Occluding surfaces frequently obstruct the object of interest yet are easily dealt with by the visual system. Here, we test whether neural areas known to participate in motion perception and eye movements are regions that also process occluded motion. Functional magnetic resonance imaging (fMRI) was used to assess brain activation while subjects watched a moving ball become occluded. Areas activated during occluded motion included the intraparietal sulcus (IPS) as well as middle temporal (MT) regions analogous to monkey MT/MST. A second experiment showed that these results were not due to motor activity. These findings suggest that human cortical regions involved in perceiving occluded motion are similar to regions that process real motion and regions responsible for eye movements. The intraparietal sulcus may be involved in predicting the location of an unseen target for future hand or eye movements. PMID:14615079

  9. Priming Contour-Deleted Images: Evidence for Immediate Representations in Visual Object Recognition.

    ERIC Educational Resources Information Center

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  10. How Category Learning Affects Object Representations: Not All Morphspaces Stretch Alike

    ERIC Educational Resources Information Center

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2012-01-01

    How does learning to categorize objects affect how people visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies have found that objects become more visually discriminable along dimensions relevant…

  11. Strength of object representation: its key role in object-based attention for determining the competition result between Gestalt and top-down objects.

    PubMed

    Zhao, Jingjing; Wang, Yonghui; Liu, Donglai; Zhao, Liang; Liu, Peng

    2015-10-01

    It was found in previous studies that two types of objects (rectangles formed according to the Gestalt principle and Chinese words formed in a top-down fashion) can both induce an object-based effect. The aim of the present study was to investigate how the strength of an object representation affects the result of the competition between these two types of objects based on research carried out by Liu, Wang and Zhou [(2011) Acta Psychologica, 138(3), 397-404]. In Experiment 1, the rectangles were filled with two different colors to increase the strength of Gestalt object representation, and we found that the object effect changed significantly for the different stimulus types. Experiment 2 used Chinese words with various familiarities to manipulate the strength of the top-down object representation. As a result, the object-based effect induced by rectangles was observed only when the Chinese word familiarity was low. These results suggest that the strength of object representation determines the result of competition between different types of objects. PMID:26041271

  12. Strength of object representation: its key role in object-based attention for determining the competition result between Gestalt and top-down objects.

    PubMed

    Zhao, Jingjing; Wang, Yonghui; Liu, Donglai; Zhao, Liang; Liu, Peng

    2015-10-01

    It was found in previous studies that two types of objects (rectangles formed according to the Gestalt principle and Chinese words formed in a top-down fashion) can both induce an object-based effect. The aim of the present study was to investigate how the strength of an object representation affects the result of the competition between these two types of objects based on research carried out by Liu, Wang and Zhou [(2011) Acta Psychologica, 138(3), 397-404]. In Experiment 1, the rectangles were filled with two different colors to increase the strength of Gestalt object representation, and we found that the object effect changed significantly for the different stimulus types. Experiment 2 used Chinese words with various familiarities to manipulate the strength of the top-down object representation. As a result, the object-based effect induced by rectangles was observed only when the Chinese word familiarity was low. These results suggest that the strength of object representation determines the result of competition between different types of objects.

  13. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be for control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution for this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among a large number of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology that enables easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  14. A technique for 3-D robot vision for space applications

    NASA Technical Reports Server (NTRS)

    Markandey, V.; Tagare, H.; Defigueiredo, R. J. P.

    1987-01-01

    An extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using Moment Invariants as features of object representation is discussed. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.
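
    Moment-based features of the kind mentioned above start from the central moments of the object. The Python/NumPy sketch below computes translation- and scale-normalised 3-D moments for an illustrative voxel object; it does not reproduce the MIAG algorithm's specific invariant set, and all shapes and values are assumptions for illustration.

        # Minimal sketch: central and scale-normalised moments of a voxelised 3-D object,
        # the kind of quantity from which moment-invariant features can be assembled.
        import numpy as np

        def central_moment(voxels, p, q, r):
            z, y, x = np.nonzero(voxels)
            w = voxels[z, y, x].astype(float)
            m000 = w.sum()
            cz, cy, cx = (w * z).sum() / m000, (w * y).sum() / m000, (w * x).sum() / m000
            return (w * (x - cx) ** p * (y - cy) ** q * (z - cz) ** r).sum()

        def normalized_moment(voxels, p, q, r):
            """Scale-normalised central moment: mu_pqr / mu_000 ** (1 + (p+q+r)/3)."""
            mu000 = central_moment(voxels, 0, 0, 0)
            return central_moment(voxels, p, q, r) / mu000 ** (1 + (p + q + r) / 3.0)

        obj = np.zeros((32, 32, 32))
        obj[8:24, 10:20, 12:18] = 1.0            # a simple box-shaped "polyhedron"
        print(normalized_moment(obj, 2, 0, 0), normalized_moment(obj, 0, 2, 0))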

  15. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using moment invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  16. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR - combining techniques towards an objective multiparametric interpretation

    NASA Astrophysics Data System (ADS)

    Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K.

    2015-09-01

    Two normal faults on the Island of Crete and mainland Greece were studied to create and test an innovative workflow to make palaeoseismic trench logging more objective and to visualise the sedimentary architecture within the trench wall in 3-D. This is achieved by combining classical palaeoseismic trenching techniques with multispectral approaches. A conventional trench log was first compared to the results of iso cluster analysis of a true colour photomosaic representing the spectrum of visible light. Disadvantages of passive data collection (e.g. illumination) were addressed by complementing the dataset with an active near-infrared backscatter image from t-LiDAR measurements. The multispectral analysis shows that distinct layers can be identified, and it compares well with the conventional trench log. Accordingly, adjacent stratigraphic units could be distinguished by their particular multispectral composition signatures. Based on the trench log, a 3-D interpretation of GPR data collected on the vertical trench wall was then possible. This is highly beneficial for measuring representative layer thicknesses, displacements and geometries at depth within the trench wall; thus, misinterpretation due to cutting effects is minimised. Sedimentary feature geometries related to earthquake magnitude can be used to improve the accuracy of seismic hazard assessments. Therefore, this manuscript combines multiparametric approaches and shows: (i) how a 3-D visualisation of palaeoseismic trench stratigraphy and logging can be accomplished by combining t-LiDAR and GPR techniques, and (ii) how a multispectral digital analysis can offer additional advantages and a higher objectivity in the interpretation of palaeoseismic and stratigraphic information. The multispectral datasets are stored, allowing unbiased input for future (re-)investigations.

  17. The Time Course of Activation of Object Shape and Shape+Colour Representations during Memory Retrieval

    PubMed Central

    Lloyd-Jones, Toby J.; Roberts, Mark V.; Leek, E. Charles; Fouquet, Nathalie C.; Truchanowicz, Ewa G.

    2012-01-01

    Little is known about the timing of activating memory for objects and their associated perceptual properties, such as colour, and yet this is important for theories of human cognition. We investigated the time course associated with early cognitive processes related to the activation of object shape and object shape+colour representations respectively, during memory retrieval as assessed by repetition priming in an event-related potential (ERP) study. The main findings were as follows: (1) we identified a unique early modulation of mean ERP amplitude during the N1 that was associated with the activation of object shape independently of colour; (2) we also found a subsequent early P2 modulation of mean amplitude over the same electrode clusters associated with the activation of object shape+colour representations; (3) these findings were apparent across both familiar (i.e., correctly coloured – yellow banana) and novel (i.e., incorrectly coloured - blue strawberry) objects; and (4) neither of the modulations of mean ERP amplitude were evident during the P3. Together the findings delineate the timing of object shape and colour memory systems and support the notion that perceptual representations of object shape mediate the retrieval of temporary shape+colour representations for familiar and novel objects. PMID:23155393

  18. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    PubMed

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program had been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in object-oriented programming language C++ using cross-platform software Qt (TrollTech, Oslo, Norway) for graphical user interface (GUI) and is transportable to any platform. PMID:17499878

  19. Hip2Norm: an object-oriented cross-platform program for 3D analysis of hip joint morphology using 2D pelvic radiographs.

    PubMed

    Zheng, G; Tannast, M; Anderegg, C; Siebenrock, K A; Langlotz, F

    2007-07-01

    We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs and optionally an additional lateral pelvic X-ray were combined with a cone beam projection model to reconstruct 3D hip joints. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomically morphologic differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if obtained in a standardized neutral orientation. The program had been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in object-oriented programming language C++ using cross-platform software Qt (TrollTech, Oslo, Norway) for graphical user interface (GUI) and is transportable to any platform.

  20. Visual representation of malleable and rigid objects that deform as they rotate.

    PubMed

    Kourtzi, Z; Shiffrar, M

    2001-04-01

    Most studies and theories of object recognition have addressed the perception of rigid objects. Yet, physical objects may also move in a nonrigid manner. A series of priming studies examined the conditions under which observers can recognize novel views of objects moving nonrigidly. Observers were primed with 2 views of a rotating object that were linked by apparent motion or presented statically. The apparent malleability of the rotating prime object varied such that the object appeared to be either malleable or rigid. Novel deformed views of malleable objects were primed when falling within the object's motion path. Priming patterns were significantly more restricted for deformed views of rigid objects. These results suggest that moving malleable objects may be represented as continuous events, whereas rigid objects may not. That is, object representations may be "dynamically remapped" during the analysis of the object's motion.

  1. On the Dynamics of Action Representations Evoked by Names of Manipulable Objects

    ERIC Educational Resources Information Center

    Bub, Daniel N.; Masson, Michael E. J.

    2012-01-01

    Two classes of hand action representations are shown to be activated by listening to the name of a manipulable object (e.g., cellphone). The functional action associated with the proper use of an object is evoked soon after the onset of its name, as indicated by primed execution of that action. Priming is sustained throughout the duration of the…

  2. The Verbal Nature of Representations of the Canonical Colors of Objects

    ERIC Educational Resources Information Center

    Gleason, Tracy R.; Fiske, Kate E.; Chan, Ruth K.

    2004-01-01

    In selecting the canonical colors of color-specific objects, children may use verbal mediation, a cognitive process whereby an object and its color are matched using verbal rather than pictorial representation [British Journal of Developmental Psychology 14 (1996) 339]. To investigate this process, 108 2- to 5-year-old children were asked to…

  3. Qualitative Differences in the Representation of Spatial Relations for Different Object Classes

    ERIC Educational Resources Information Center

    Cooper, Eric E.; Brooks, Brian E.

    2004-01-01

    Two experiments investigated whether the representations used for animal, produce, and object recognition code spatial relations in a similar manner. Experiment 1 tested the effects of planar rotation on the recognition of animals and nonanimal objects. Response times for recognizing animals followed an inverted U-shaped function, whereas those…

  4. Mirror-Image Confusions: Implications for Representation and Processing of Object Orientation

    ERIC Educational Resources Information Center

    Gregory, Emma; McCloskey, Michael

    2010-01-01

    Perceiving the orientation of objects is important for interacting with the world, yet little is known about the mental representation or processing of object orientation information. The tendency of humans and other species to confuse mirror images provides a potential clue. However, the appropriate characterization of this phenomenon is not…

  5. Object-Oriented Echo Perception and Cortical Representation in Echolocating Bats

    PubMed Central

    Grunwald, Jan E; Schuller, Gerd; Wiegrebe, Lutz

    2007-01-01

    Echolocating bats can identify three-dimensional objects exclusively through the analysis of acoustic echoes of their ultrasonic emissions. However, objects of the same structure can differ in size, and the auditory system must achieve a size-invariant, normalized object representation for reliable object recognition. This study describes both the behavioral classification and the cortical neural representation of echoes of complex virtual objects that vary in object size. In a phantom-target playback experiment, it is shown that the bat Phyllostomus discolor spontaneously classifies most scaled versions of objects according to trained standards. This psychophysical performance is reflected in the electrophysiological responses of a population of cortical units that showed an object-size invariant response (14/109 units, 13%). These units respond preferentially to echoes from objects in which echo duration (encoding object depth) and echo amplitude (encoding object surface area) co-varies in a meaningful manner. These results indicate that at the level of the bat's auditory cortex, an object-oriented rather than a stimulus-parameter–oriented representation of echoes is achieved. PMID:17425407

  6. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. PMID:23624577
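
    The combination of bottom-up saliency with a top-down bias toward the target's colour can be sketched schematically. The Python snippet below uses a crude centre-surround saliency and a Gaussian colour-similarity bias as stand-ins; it does not reproduce the GFTART network or the authors' feature representation, and all values are illustrative.

        # Schematic sketch: a bottom-up saliency map modulated by a top-down colour bias,
        # so that candidate regions matching the memorised target are preferred.
        import numpy as np

        def bottom_up_saliency(image):
            """Crude centre-surround contrast on intensity as a stand-in saliency map."""
            intensity = image.mean(axis=2)
            padded = np.pad(intensity, 2, mode="edge")
            local_mean = np.mean(
                [padded[dy:dy + intensity.shape[0], dx:dx + intensity.shape[1]]
                 for dy in range(5) for dx in range(5)], axis=0)
            return np.abs(intensity - local_mean)

        def top_down_color_bias(image, target_rgb, sigma=0.2):
            """Gaussian similarity of each pixel's colour to the memorised target colour."""
            dist2 = ((image - np.asarray(target_rgb)) ** 2).sum(axis=2)
            return np.exp(-dist2 / (2 * sigma ** 2))

        rng = np.random.default_rng(2)
        scene = rng.random((64, 64, 3))
        attention = bottom_up_saliency(scene) * top_down_color_bias(scene, target_rgb=(1.0, 0.2, 0.2))
        focus = np.unravel_index(np.argmax(attention), attention.shape)
        print("most likely target location (row, col):", focus)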

  7. The effect of spatial competition between object-level representations of target and mask on object substitution masking.

    PubMed

    Guest, Duncan; Gellatly, Angus; Pilling, Michael

    2011-11-01

    One of the processes determining object substitution masking (OSM) is thought to be the spatial competition between independent object file representations of the target and mask (e.g., Kahan & Lichtman, 2006). In a series of experiments, we further examined how OSM is influenced by this spatial competition by manipulating the overlap between the surfaces created by the modal completion of the target (an outline square with a gap in one of its sides) and the mask (a four-dot mask). The results of these experiments demonstrate that increasing the spatial overlap between the surfaces of the target and mask increases OSM. Importantly, this effect is not caused by the mask interfering with the processing of the target features it overlaps. Overall, the data indicate, consistent with Kahan and Lichtman, that OSM can arise through competition between independent target and mask representations.

  8. Method for the determination of the modulation transfer function (MTF) in 3D x-ray imaging systems with focus on correction for finite extent of test objects

    NASA Astrophysics Data System (ADS)

    Schäfer, Dirk; Wiegert, Jens; Bertram, Matthias

    2007-03-01

    It is well known that rotational C-arm systems are capable of providing 3D tomographic X-ray images with much higher spatial resolution than conventional CT systems. With flat X-ray detectors, the pixel size of the detector is typically in the range of the size of the test objects. Therefore, the finite extent of the "point" source cannot be neglected in the determination of the MTF. A practical algorithm has been developed that includes bias estimation and subtraction, averaging in the spatial domain, and correction for the frequency content of the imaged bead or wire. Using this algorithm, the wire and the bead method are analyzed for flat-detector-based 3D X-ray systems with the use of standard CT performance phantoms. Results on both experimental and simulated data are presented. It is found that the approximation of applying the analysis of the wire method to a bead measurement is justified within 3% accuracy up to the first zero of the MTF.
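
    The processing chain named in the abstract (bias subtraction, spatial averaging, correction for the test object's finite extent) can be outlined roughly as follows. The Python sketch treats the wire as a rectangular profile whose transform is divided out, which is an assumption for illustration rather than the authors' exact algorithm or parameters.

        # Outline: estimate the MTF from a measured wire/bead profile by bias subtraction,
        # Fourier transform, and division by the analytic transform of the finite test object.
        # (Averaging over many parallel profiles, as done in practice, is omitted here.)
        import numpy as np

        def mtf_from_profile(profile, pixel_mm, wire_diameter_mm):
            # 1. Bias estimation and subtraction (background taken from the profile tails).
            bias = 0.5 * (profile[:10].mean() + profile[-10:].mean())
            lsf = profile - bias
            # 2. Fourier transform of the line spread function, normalised at zero frequency.
            spectrum = np.abs(np.fft.rfft(lsf))
            spectrum /= spectrum[0]
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)          # cycles / mm
            # 3. Correct for the finite wire width: divide by its transform
            #    (a sinc, treating the wire as a rectangle of width = diameter).
            correction = np.sinc(freqs * wire_diameter_mm)
            valid = np.abs(correction) > 0.05                       # avoid blow-up near sinc zeros
            mtf = np.where(valid, spectrum / np.where(valid, correction, 1.0), np.nan)
            return freqs, mtf

        profile = np.exp(-np.linspace(-3, 3, 256) ** 2) + 0.01      # synthetic bead profile
        freqs, mtf = mtf_from_profile(profile, pixel_mm=0.2, wire_diameter_mm=0.15)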

  9. A Modified Exoskeleton for 3D Shape Description and Recognition

    NASA Astrophysics Data System (ADS)

    Lipikorn, Rajalida; Shimizu, Akinobu; Hagihara, Yoshihiro; Kobatake, Hidefumi

    Three-dimensional (3D) shape representation is a powerful tool in object recognition, which is an essential process in any image processing and analysis system. The skeleton is one of the most widely used representations for object recognition; nevertheless, most skeletons obtained from conventional methods are susceptible to rotation and noise disturbances. In this paper, we present a new 3D object representation called a modified exoskeleton (mES), which preserves skeleton properties, including significant characteristics of an object that are meaningful for object recognition, and is more stable and less susceptible to rotation and noise than conventional skeletons. A 3D shape recognition methodology which determines the similarity between an observed object and other known objects in a database is then introduced. Through a number of experiments on 3D artificial objects and real volumetric lung tumors extracted from CT images, it is verified that the proposed methodology based on the mES is a simple yet efficient method that is less sensitive to rotation and noise and is independent of the orientation and size of the objects.

  10. The Representation of Object-Directed Action and Function Knowledge in the Human Brain.

    PubMed

    Chen, Quanjing; Garcea, Frank E; Mahon, Bradford Z

    2016-04-01

    The appropriate use of everyday objects requires the integration of action and function knowledge. Previous research suggests that action knowledge is represented in frontoparietal areas while function knowledge is represented in temporal lobe regions. Here we used multivoxel pattern analysis to investigate the representation of object-directed action and function knowledge while participants executed pantomimes of familiar tool actions. A novel approach for decoding object knowledge was used in which classifiers were trained on one pair of objects and then tested on a distinct pair; this permitted a measurement of classification accuracy over and above object-specific information. Region of interest (ROI) analyses showed that object-directed actions could be decoded in tool-preferring regions of both parietal and temporal cortex, while no independently defined tool-preferring ROI showed successful decoding of object function. However, a whole-brain searchlight analysis revealed that while frontoparietal motor and peri-motor regions are engaged in the representation of object-directed actions, medial temporal lobe areas in the left hemisphere are involved in the representation of function knowledge. These results indicate that both action and function knowledge are represented in a topographically coherent manner that is amenable to study with multivariate approaches, and that the left medial temporal cortex represents knowledge of object function. PMID:25595179
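
    The train-on-one-pair, test-on-another decoding scheme described above can be sketched with scikit-learn. The data below are synthetic, the object pairs are only named for illustration, and the classifier choice is an assumption rather than the authors' analysis pipeline.

        # Minimal sketch: cross-decoding of action (e.g., twist vs. press) trained on trials
        # from one object pair and tested on a different pair, so that above-chance accuracy
        # cannot rest on object-specific information. Data are synthetic.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(3)
        n_voxels = 200
        action_axis = rng.normal(size=n_voxels)            # shared "action" pattern direction

        def trials(action_label, n=40):
            signal = (1 if action_label else -1) * 0.5 * action_axis
            return rng.normal(size=(n, n_voxels)) + signal

        # Train on object pair A (e.g., screwdriver / stapler), test on pair B (e.g., key / calculator).
        X_train = np.vstack([trials(1), trials(0)])
        y_train = np.array([1] * 40 + [0] * 40)
        X_test = np.vstack([trials(1), trials(0)])
        y_test = np.array([1] * 40 + [0] * 40)

        clf = LinearSVC(dual=False).fit(X_train, y_train)
        print("cross-object decoding accuracy:", clf.score(X_test, y_test))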

  11. Spatiotemporal Form Integration: Sequentially presented inducers can lead to representations of stationary and rigidly rotating objects

    PubMed Central

    McCarthy, J. Daniel; Strother, Lars; Caplovitz, Gideon Paul

    2016-01-01

    Objects in the world are often occluded and in motion. The visible fragments of such objects are revealed at different times and locations in space. To form coherent representations of the surfaces of these objects, the visual system must integrate local form information over space and time. We introduce a new illusion in which a rigidly rotating square is perceived on the basis of sequentially presented Pacman inducers. The illusion highlights two fundamental processes that allow us to perceive objects whose form features are revealed over time: Spatiotemporal Form Integration (STFI) and Position Updating. STFI refers to the spatial integration of persistent representations of local form features across time. Position updating of these persistent form representations allows them to be integrated into a rigid global motion percept. We describe three psychophysical experiments designed to identify spatial and temporal constraints that underlie these two processes and a fourth experiment that extends these findings to more ecologically valid stimuli. Our results indicate that although STFI can occur across relatively long delays between successive inducers (i.e., greater than 500 ms), position updating is limited to a more restricted temporal window (i.e., ~300 ms or less) and to a confined range of spatial (mis)alignment. These findings lend insight into the limits of mechanisms underlying the visual system's capacity to integrate transient, piecemeal form information and support coherent object representations in the ever-changing environment. PMID:26269386

  12. A rudimentary database for three-dimensional objects using structural representation

    NASA Technical Reports Server (NTRS)

    Sowers, James P.

    1987-01-01

    A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.

  13. Disentangling Representations of Object Shape and Object Category in Human Visual Cortex: The Animate-Inanimate Distinction.

    PubMed

    Proklova, Daria; Kaiser, Daniel; Peelen, Marius V

    2016-05-01

    Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake-rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate-inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.

  14. Nonvisual and visual object shape representations in occipitotemporal cortex: evidence from congenitally blind and sighted adults.

    PubMed

    Peelen, Marius V; He, Chenxi; Han, Zaizhu; Caramazza, Alfonso; Bi, Yanchao

    2014-01-01

    Knowledge of object shape is primarily acquired through the visual modality but can also be acquired through other sensory modalities. In the present study, we investigated the representation of object shape in humans without visual experience. Congenitally blind and sighted participants rated the shape similarity of pairs of 33 familiar objects, referred to by their names. The resulting shape similarity matrices were highly similar for the two groups, indicating that knowledge of the objects' shapes was largely independent of visual experience. Using fMRI, we tested for brain regions that represented object shape knowledge in blind and sighted participants. Multivoxel activity patterns were established for each of the 33 aurally presented object names. Sighted participants additionally viewed pictures of these objects. Using representational similarity analysis, neural similarity matrices were related to the behavioral shape similarity matrices. Results showed that activity patterns in occipitotemporal cortex (OTC) regions, including inferior temporal (IT) cortex and functionally defined object-selective cortex (OSC), reflected the behavioral shape similarity ratings in both blind and sighted groups, also when controlling for the objects' tactile and semantic similarity. Furthermore, neural similarity matrices of IT and OSC showed similarities across blind and sighted groups (within the auditory modality) and across modality (within the sighted group), but not across both modality and group (blind auditory-sighted visual). Together, these findings provide evidence that OTC not only represents objects visually (requiring visual experience) but also represents objects nonvisually, reflecting knowledge of object shape independently of the modality through which this knowledge was acquired.

  15. The Game Object Model and Expansive Learning: Creation, Instantiation, Expansion, and Re-representation

    ERIC Educational Resources Information Center

    Amory, Alan; Molomo, Bolepo; Blignaut, Seugnet

    2011-01-01

    In this paper, the collaborative development, instantiation, expansion and re-representation as research instrument of the Game Object Model (GOM) are explored from a Cultural Historical Activity Theory perspective. The aim of the paper is to develop insights into the design, integration, evaluation and use of video games in learning and teaching.…

  16. The Nature of Experience Determines Object Representations in the Visual System

    ERIC Educational Resources Information Center

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  17. Object shape classification and scene shape representation for three-dimensional laser scanned outdoor data

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2013-02-01

    Shape analysis of a three-dimensional (3-D) scene is an important issue and could be widely used for various applications: city planning, robot navigation, virtual tourism, etc. We introduce an approach for understanding the primitive shapes of a scene in order to reveal its semantic shape structure and represent the scene using shape elements. The scene objects are labeled and recognized using the geometric and semantic features of each cluster, based on knowledge of the scene. Furthermore, objects in the scene with different primitive shapes can also be classified and fitted using the Gaussian map of the segmented scene. We demonstrate the presented approach on several complex scenes from laser scanning. According to the experimental results, the proposed method can accurately represent the geometric structure of the 3-D scene.
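
    As a concrete illustration of the Gaussian-map idea mentioned above (an illustrative sketch, not the authors' algorithm): on the unit sphere, the normals of a planar cluster collapse to a point, the normals of a cylindrical or conical cluster fall close to a single circle, and the normals of a sphere-like cluster spread in all directions, so the spread of a cluster's normals gives a crude primitive-shape label.

        import numpy as np

        def classify_primitive(normals, point_tol=0.05, plane_tol=0.05):
            """Crudely label a segmented cluster from its Gaussian map.

            normals : (n, 3) array of unit surface normals estimated for the cluster.
            """
            n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            # Singular values of the centered normal cloud measure its spread on the sphere.
            s = np.linalg.svd(n - n.mean(axis=0), compute_uv=False) / np.sqrt(len(n))
            if s[0] < point_tol:
                return "plane"          # all normals nearly identical -> single point on the sphere
            if s[2] < plane_tol:
                return "cylinder/cone"  # normals nearly coplanar -> circle on the sphere
            return "sphere-like"        # normals spread in all directions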

  18. Distributed Representation of Visual Objects by Single Neurons in the Human Brain

    PubMed Central

    Valdez, André B.; Papesh, Megan H.; Treiman, David M.; Smith, Kris A.; Goldinger, Stephen D.

    2015-01-01

    It remains unclear how single neurons in the human brain represent whole-object visual stimuli. While recordings in both human and nonhuman primates have shown distributed representations of objects (many neurons encoding multiple objects), recordings of single neurons in the human medial temporal lobe, taken while subjects discriminated objects during multiple presentations, have shown gnostic representations (single neurons encoding one object). Because some studies suggest that repeated viewing may enhance neural selectivity for objects, we had human subjects discriminate objects in a single, more naturalistic viewing session. We found that, across 432 well isolated neurons recorded in the hippocampus and amygdala, the average fraction of objects encoded was 26%. We also found that more neurons encoded several objects versus only one object in the hippocampus (28 vs 18%, p < 0.001) and in the amygdala (30 vs 19%, p < 0.001). Thus, during realistic viewing experiences, typical neurons in the human medial temporal lobe code for a considerable range of objects, across multiple semantic categories. PMID:25834044

  19. Gravity influences the visual representation of object tilt in parietal cortex.

    PubMed

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction.

  20. Information Object Definition–based Unified Modeling Language Representation of DICOM Structured Reporting

    PubMed Central

    Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K.P.

    2002-01-01

    Supplement 23 to DICOM (Digital Imaging and Communications in Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification. PMID:11751804

  1. A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification

    PubMed Central

    Kaneshiro, Blair; Perreau Guimaraes, Marcos; Kim, Hyung-Suk; Norcia, Anthony M.

    2015-01-01

    The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes. PMID

  2. A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification.

    PubMed

    Kaneshiro, Blair; Perreau Guimaraes, Marcos; Kim, Hyung-Suk; Norcia, Anthony M; Suppes, Patrick

    2015-01-01

    The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes. PMID
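
    A minimal sketch of the step these two records describe, turning a multi-class confusion matrix into a representational dissimilarity matrix (RDM), is given below. The symmetrization and normalization choices are illustrative assumptions, not necessarily the authors' exact recipe.

        import numpy as np

        def confusion_to_rdm(confusion):
            """confusion[i, j]: number of trials of class i that the classifier labeled as class j."""
            p = confusion / confusion.sum(axis=1, keepdims=True)  # row-normalize to proportions
            similarity = 0.5 * (p + p.T)                          # symmetrize pairwise confusability
            rdm = 1.0 - similarity                                # frequent confusion -> low dissimilarity
            np.fill_diagonal(rdm, 0.0)
            return rdm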

  3. System for conversion between the boundary representation model and a constructive solid geometry model of an object

    DOEpatents

    Christensen, Noel C.; Emery, James D.; Smith, Maurice L.

    1988-04-05

    A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object.

  4. System for conversion between the boundary representation model and a constructive solid geometry model of an object

    DOEpatents

    Christensen, N.C.; Emery, J.D.; Smith, M.L.

    1985-04-29

    A system converts from the boundary representation of an object to the constructive solid geometry representation thereof. The system converts the boundary representation of the object into elemental atomic geometrical units or I-bodies which are in the shape of stock primitives or regularized intersections of stock primitives. These elemental atomic geometrical units are then represented in symbolic form. The symbolic representations of the elemental atomic geometrical units are then assembled heuristically to form a constructive solid geometry representation of the object usable for manufacturing thereof. Artificial intelligence is used to determine the best constructive solid geometry representation from the boundary representation of the object. Heuristic criteria are adapted to the manufacturing environment for which the device is to be utilized. The surface finish, tolerance, and other information associated with each surface of the boundary representation of the object are mapped onto the constructive solid geometry representation of the object to produce an enhanced solid geometry representation, particularly useful for computer-aided manufacture of the object. 19 figs.
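
    As a rough, hypothetical illustration of the kind of target representation the two patent records above describe (a toy sketch, not the patented system), a constructive solid geometry model can be held as a symbolic expression tree whose leaves are stock primitives, whose internal nodes are regularized Boolean operations, and whose primitives carry the surface finish and tolerance data mapped over from the boundary representation.

        from dataclasses import dataclass, field
        from typing import Union

        @dataclass
        class Primitive:
            kind: str                                    # e.g. "block", "cylinder", "sphere"
            params: dict                                 # dimensions, position, orientation
            finish: dict = field(default_factory=dict)   # surface finish / tolerance per face

        @dataclass
        class BooleanOp:
            op: str                                      # "union", "difference" or "intersection"
            left: "CSGNode"
            right: "CSGNode"

        CSGNode = Union[Primitive, BooleanOp]

        # A plate with a drilled hole: the difference of a block and a cylinder.
        part = BooleanOp("difference",
                         Primitive("block", {"dx": 100, "dy": 60, "dz": 10}),
                         Primitive("cylinder", {"radius": 5, "height": 10, "x": 50, "y": 30}))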

  5. 3D Geo: An Alternative Approach

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.

    2016-10-01

    The expression GEO is mostly used to denote a relation to the earth. However, it should not be confined to what is related to the earth's surface, as other objects, such as cultural heritage objects, also need three-dimensional representation and documentation. These include both tangible and intangible assets. In this paper the 3D data acquisition and 3D modelling of cultural heritage assets are briefly described and their significance is highlighted. Moreover, the organization of such information, related to monuments and artefacts, into relational databases, and its use for various purposes other than just geometric documentation, is also described. In order to help the reader understand the above, several characteristic examples are presented, their methodology explained and their results evaluated.

  6. Characterizing the information content of a newly hatched chick's first visual object representation.

    PubMed

    Wood, Justin N

    2015-03-01

    How does object recognition emerge in the newborn brain? To address this question, I examined the information content of the first visual object representation built by newly hatched chicks (Gallus gallus). In their first week of life, chicks were raised in controlled-rearing chambers that contained a single virtual object rotating around a single axis. In their second week of life, I tested whether subjects had encoded information about the identity and viewpoint of the virtual object. The results showed that chicks built object representations that contained both object identity information and view-specific information. However, there was a trade-off between these two types of information: subjects who were more sensitive to identity information were less sensitive to view-specific information, and vice versa. This pattern of results is predicted by iterative, hierarchically organized visual processing machinery, the machinery that supports object recognition in adult primates. More generally, this study shows that invariant object recognition is a core cognitive ability that can be operational at the onset of visual object experience.

  7. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system, it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from the paper form to the digital one is complex and difficult, particularly owing to the different types of drawings, the forms of the displayed objects, and the errors and deviations from technical standards that are encountered. The algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part is described in this contribution. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing a rotational part was used for verification.

  8. Neural signatures for sustaining object representations attributed to others in preverbal human infants

    PubMed Central

    Kampis, Dora; Parise, Eugenio; Csibra, Gergely; Kovács, Ágnes Melinda

    2015-01-01

    A major feat of social beings is to encode what their conspecifics see, know or believe. While various non-human animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people's mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents' mental states, specifically metarepresentations. We explored the neurocognitive bases of eight-month-olds' ability to encode the world from another person's perspective, using gamma-band electroencephalographic activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants' perspective, as well as when it was occluded only from the other person (study 1), and also when subsequently the object disappeared, but the person falsely believed the object to be present (study 2). These findings suggest that the cognitive systems involved in representing the world from infants' own perspective are also recruited for encoding others' beliefs. Such results point to an early-developing, powerful apparatus suitable to deal with multiple concurrent representations, and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language. PMID:26559949

  9. Neural signatures for sustaining object representations attributed to others in preverbal human infants.

    PubMed

    Kampis, Dora; Parise, Eugenio; Csibra, Gergely; Kovács, Ágnes Melinda

    2015-11-22

    A major feat of social beings is to encode what their conspecifics see, know or believe. While various non-human animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people's mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents' mental states, specifically metarepresentations. We explored the neurocognitive bases of eight-month-olds' ability to encode the world from another person's perspective, using gamma-band electroencephalographic activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants' perspective, as well as when it was occluded only from the other person (study 1), and also when subsequently the object disappeared, but the person falsely believed the object to be present (study 2). These findings suggest that the cognitive systems involved in representing the world from infants' own perspective are also recruited for encoding others' beliefs. Such results point to an early-developing, powerful apparatus suitable to deal with multiple concurrent representations, and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language. PMID:26559949

  10. Improvement and characterization of the adhesion of electrospun PLDLA nanofibers on PLDLA-based 3D object substrates for orthopedic application.

    PubMed

    Wimpenny, I; Lahteenkorva, K; Suokas, E; Ashammakhi, N; Yang, Y

    2012-01-01

    Intensive research has demonstrated the clear biological potential of electrospun nanofibers for tissue regeneration and repair. However, nanofibers alone have limited mechanical properties. In this study we took poly(L-lactide-co-D-lactide) (PLDLA)-based 3D objects, one existing medical device (interference screws) and one medical device model (discs) as examples to form composites through coating their surface with electrospun PLDLA nanofibers. We specifically investigated the effects of electrospinning parameters on the improvement of adhesion of the electrospun nanofibers to the PLDLA-based substrates. To reveal the adhesion mechanisms, a novel peel test protocol was developed for the characterization of the adhesion and delamination phenomenon of the nanofibers deposited to substrates. The effect of incubation of the composites under physiological conditions on the adhesion of the nanofibers has also been studied. It was revealed that reduction of the working distance to 10 cm resulted in deposition of residual solvent during electrospinning of nanofibers onto the substrate, causing fiber-fiber bonding. Delamination of this coating occurred between the whole nanofiber layer and substrate, at low stress. Fibers deposited at 15 cm working distance were of smaller diameter and no residual solvent was observed during deposition. Delamination occurred between nanofiber layers, which peeled off under greater stress. This study represents a novel method for the alteration of nanofiber adhesion to substrates, and quantification of the change in the adhesion state, which has potential applications to develop better medical devices for orthopedic tissue repair and regeneration. PMID:21943952

  11. Computational modeling of the neural representation of object shape in the primate ventral visual system

    PubMed Central

    Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.

    2015-01-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and instead provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766

  12. 3D representation of geochemical data, the corresponding alteration and associated REE mobility at the Ranger uranium deposit, Northern Territory, Australia

    NASA Astrophysics Data System (ADS)

    Fisher, Louise A.; Cleverley, James S.; Pownceby, Mark; MacRae, Colin

    2013-12-01

    Interrogation and 3D visualisation of multiple multi-element data sets collected at the Ranger 1 No. 3 uranium mine, in the Northern Territory of Australia, show a distinct and large-scale chemical zonation around the ore body. A central zone of Mg alteration, dominated by extensive clinochlore alteration, overprints a biotite-muscovite-K-feldspar assemblage which shows increasing loss of Na, Ba and Ca moving towards the ore body. Manipulation of pre-existing geochemical data and integration of new data collected from targeted 'niche' samples make it possible to recognise chemical architecture within the system and identify potential fluid conduits. New trace element and rare earth element (REE) data show strong fractionation associated with the zoned alteration around the deposit and with fault planes that intersect and bound the deposit. Within the most altered portion of the system, isocon analysis indicates addition of elements including Mg, S, Cu, Au and Ni and removal of elements including Ca, K, Ba and Na within a zone of damage associated with ore precipitation. In the more distal parts of the system, processes of alteration and replacement associated with the mineralising system can be recognised. REE data show enrichment in HREE centred about a characteristic peak in Dy in the high-grade ore zone, while LREEs are enriched in the outermost portions of the system. The 3D zoning patterns of geochemical groups, the contoured S, K and Mg abundances, and the observed REE patterns suggest a fluid-flow regime in which fluids were predominantly migrating upwards during ore deposition within the core of the ore system.

  13. Evidence for spatial representation of object shape by echolocating bats (Eptesicus fuscus)

    PubMed Central

    DeLong, Caroline M.; Bragg, Rebecca; Simmons, James A.

    2008-01-01

    Big brown bats were trained in a two-choice task to locate a two-cylinder dipole object with a constant 5 cm spacing in the presence of either a one-cylinder monopole or another two-cylinder dipole with a shorter spacing. For the dipole versus monopole task, the objects were either stationary or in motion during each trial. The dipole and monopole objects varied from trial to trial in the left-right position while also roving in range (10–40 cm), cross range separation (15–40 cm), and dipole aspect angle (0°–90°). These manipulations prevented any single feature of the acoustic stimuli from being a stable indicator of which object was the correct choice. After accounting for effects of masking between echoes from pairs of cylinders at similar distances, the bats discriminated the 5 cm dipole from both the monopole and dipole alternatives with performance independent of aspect angle, implying a distal, spatial object representation rather than a proximal, acoustic object representation. PMID:18537406

  14. The Neural Representation of 3-Dimensional Objects in Rodent Memory Circuits

    PubMed Central

    Burke, Sara N.; Barnes, Carol A.

    2014-01-01

    Three-dimensional objects are common stimuli that rodents and other animals encounter in the natural world that contribute to the associations that are the hallmark of explicit memory. Thus, the use of 3-dimensional objects for investigating the circuits that support associative and episodic memories has a long history. In rodents, the neural representation of these types of stimuli is a polymodal process and lesion data suggest that the perirhinal cortex, an area of the medial temporal lobe that receives afferent input from all sensory modalities, is particularly important for integrating sensory information across modalities to support object recognition. Not surprisingly, recent data from in vivo electrophysiological recordings have shown that principal cells within the perirhinal cortex are activated at locations of an environment that contain 3-dimensional objects. Interestingly, it appears that neural activity patterns related to object stimuli are ubiquitous across memory circuits and have now been observed in many medial temporal lobe structures as well as in the anterior cingulate cortex. This review summarizes behavioral and neurophysiological data that examine the representation of 3-dimensional objects across brain regions that are involved in memory. PMID:25205370

  15. Using the reassignment procedure to test object representation in pigeons and people.

    PubMed

    Peissig, Jessie J; Nagasaka, Yasuo; Young, Michael E; Wasserman, Edward A; Biederman, Irving

    2015-06-01

    In four experiments, we evaluated Lea's (1984) reassignment procedure for studying object representation in pigeons (Experiments 1-3) and humans (Experiment 4). In the initial phase of Experiment 1, pigeons were taught to make discriminative button responses to five views of each of four objects. Using the same set of buttons in the second phase, one view of each object was trained to a different button. In the final phase, the four views that had been withheld in the second stage were shown. In Experiment 2, pigeons were initially trained just like the birds in Experiment 1. Then, one view of each object was reassigned to a different button, now using a new set of four response buttons. In Experiment 3, the reassignment paradigm was again tested using the number of pecks to bind together different views of the same object. Across all three experiments, pigeons showed statistically significant generalization of the new response to the non-reassigned views, but such responding was well below that to the reassigned view. In Experiment 4, human participants were studied using the same stimuli and task as the pigeons in Experiment 1. People did strongly generalize the new response to the non-reassigned views. These results indicate that humans, but not pigeons, can employ a unified object representation that they can flexibly map to different responses under the reassignment procedure. PMID:25762428

  16. The neural representation of 3-dimensional objects in rodent memory circuits.

    PubMed

    Burke, Sara N; Barnes, Carol A

    2015-05-15

    Three-dimensional objects are common stimuli that rodents and other animals encounter in the natural world that contribute to the associations that are the hallmark of explicit memory. Thus, the use of 3-dimensional objects for investigating the circuits that support associative and episodic memories has a long history. In rodents, the neural representation of these types of stimuli is a polymodal process and lesion data suggest that the perirhinal cortex, an area of the medial temporal lobe that receives afferent input from all sensory modalities, is particularly important for integrating sensory information across modalities to support object recognition. Not surprisingly, recent data from in vivo electrophysiological recordings have shown that principal cells within the perirhinal cortex are activated at locations of an environment that contain 3-dimensional objects. Interestingly, it appears that neural activity patterns related to object stimuli are ubiquitous across memory circuits and have now been observed in many medial temporal lobe structures as well as in the anterior cingulate cortex. This review summarizes behavioral and neurophysiological data that examine the representation of 3-dimensional objects across brain regions that are involved in memory. PMID:25205370

  17. Eye Contact Affects Object Representation in 9-Month-Old Infants

    PubMed Central

    Okumura, Yuko; Kobayashi, Tessei; Itakura, Shoji

    2016-01-01

    Social cues in interaction with others enable infants to extract useful information from their environment. Although previous research has shown that infants process and retain different information about an object depending on the presence of social cues, the effect of eye contact as an isolated independent variable has not been investigated. The present study investigated how eye contact affects infants’ object processing. Nine-month-olds engaged in two types of social interactions with an experimenter. When the experimenter showed an object without eye contact, the infants processed and remembered both the object’s location and its identity. In contrast, when the experimenter showed the object while making eye contact with the infant, the infant preferentially processed object’s identity but not its location. Such effects might assist infants to selectively attend to useful information. Our findings revealed that 9-month-olds’ object representations are modulated in accordance with the context, thus elucidating the function of eye contact for infants’ object representation. PMID:27776155

  18. Using the reassignment procedure to test object representation in pigeons and people.

    PubMed

    Peissig, Jessie J; Nagasaka, Yasuo; Young, Michael E; Wasserman, Edward A; Biederman, Irving

    2015-06-01

    In four experiments, we evaluated Lea's (1984) reassignment procedure for studying object representation in pigeons (Experiments 1-3) and humans (Experiment 4). In the initial phase of Experiment 1, pigeons were taught to make discriminative button responses to five views of each of four objects. Using the same set of buttons in the second phase, one view of each object was trained to a different button. In the final phase, the four views that had been withheld in the second stage were shown. In Experiment 2, pigeons were initially trained just like the birds in Experiment 1. Then, one view of each object was reassigned to a different button, now using a new set of four response buttons. In Experiment 3, the reassignment paradigm was again tested using the number of pecks to bind together different views of the same object. Across all three experiments, pigeons showed statistically significant generalization of the new response to the non-reassigned views, but such responding was well below that to the reassigned view. In Experiment 4, human participants were studied using the same stimuli and task as the pigeons in Experiment 1. People did strongly generalize the new response to the non-reassigned views. These results indicate that humans, but not pigeons, can employ a unified object representation that they can flexibly map to different responses under the reassignment procedure.

  19. Making the Invisible Visible: Enhancing Students' Conceptual Understanding by Introducing Representations of Abstract Objects in a Simulation

    ERIC Educational Resources Information Center

    Olympiou, Georgios; Zacharias, Zacharia; deJong, Ton

    2013-01-01

    This study aimed to identify if complementing representations of concrete objects with representations of abstract objects improves students' conceptual understanding as they use a simulation to experiment in the domain of "Light and Color". Moreover, we investigated whether students' prior knowledge is a factor that must be considered in deciding…

  20. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class objects feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition.

  1. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class objects feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition. PMID:26906591
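
    The classification step at the heart of these two records can be sketched as follows. This is only the generic sparse-coding idea under assumed interfaces; the paper's full pipeline additionally involves SIFT/bag-of-words features, K-SVD dictionary learning and a saliency-based candidate search, none of which are reproduced here.

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def classify_by_sparse_code(x, dictionary, atom_labels, n_nonzero=10):
            """Assign feature vector x to the class whose dictionary atoms reconstruct it best.

            dictionary  : (n_features, n_atoms) matrix with unit-norm columns (atoms)
            atom_labels : (n_atoms,) class label associated with each atom
            """
            code = orthogonal_mp(dictionary, x, n_nonzero_coefs=n_nonzero)  # sparse coefficients
            residuals = {}
            for c in np.unique(atom_labels):
                mask = atom_labels == c
                residuals[c] = np.linalg.norm(x - dictionary[:, mask] @ code[mask])
            return min(residuals, key=residuals.get)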

  2. The Representation of Objects in Apraxia: From Action Execution to Error Awareness

    PubMed Central

    Canzano, Loredana; Scandola, Michele; Gobbetto, Valeria; Moretto, Giuseppe; D’Imperio, Daniela; Moro, Valentina

    2016-01-01

    Apraxia is a well-known syndrome characterized by the sufferer’s inability to perform routine gestures. In an attempt to understand the syndrome better, various different theories have been developed and a number of classifications of different subtypes have been proposed. In this article review, we will address these theories with a specific focus on how the use of objects helps us to better understand upper limb apraxia. With this aim, we will consider transitive vs. intransitive action dissociation as well as less frequent types of apraxia involving objects, i.e., constructive apraxia and magnetic apraxia. Pantomime and the imitation of objects in use are also considered with a view to dissociating the various different components involved in upper limb apraxia. Finally, we discuss the evidence relating to action recognition and awareness of errors in the execution of actions. Various different components concerning the use of objects emerge from our analysis and the results show that knowledge of an object and sensory-motor representations are supported by other functions such as spatial and body representations, executive functions and monitoring systems. PMID:26903843

  3. The Representation of Objects in Apraxia: From Action Execution to Error Awareness.

    PubMed

    Canzano, Loredana; Scandola, Michele; Gobbetto, Valeria; Moretto, Giuseppe; D'Imperio, Daniela; Moro, Valentina

    2016-01-01

    Apraxia is a well-known syndrome characterized by the sufferer's inability to perform routine gestures. In an attempt to understand the syndrome better, various different theories have been developed and a number of classifications of different subtypes have been proposed. In this article review, we will address these theories with a specific focus on how the use of objects helps us to better understand upper limb apraxia. With this aim, we will consider transitive vs. intransitive action dissociation as well as less frequent types of apraxia involving objects, i.e., constructive apraxia and magnetic apraxia. Pantomime and the imitation of objects in use are also considered with a view to dissociating the various different components involved in upper limb apraxia. Finally, we discuss the evidence relating to action recognition and awareness of errors in the execution of actions. Various different components concerning the use of objects emerge from our analysis and the results show that knowledge of an object and sensory-motor representations are supported by other functions such as spatial and body representations, executive functions and monitoring systems. PMID:26903843

  4. Using the Flow-3D General Moving Object Model to Simulate Coupled Liquid Slosh - Container Dynamics on the SPHERES Slosh Experiment: Aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Schulman, Richard; Kirk, Daniel; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul

    2013-01-01

    The SPHERES Slosh Experiment (SSE) is a free-floating experimental platform developed for the acquisition of long-duration liquid slosh data aboard the International Space Station (ISS). The data sets collected will be used to benchmark numerical models to aid in the design of rocket and spacecraft propulsion systems. Utilizing two SPHERES satellites, the experiment will be moved through different maneuvers designed to induce liquid slosh in the experiment's internal tank. The SSE has a total of twenty-four thrusters to move the experiment. In order to design slosh-generating maneuvers, a parametric study with three maneuver types was conducted using the General Moving Object (GMO) model in Flow-3D. The three types of maneuvers are a translation maneuver, a rotation maneuver and a combined rotation-translation maneuver. The effectiveness of each maneuver in generating slosh is determined by the deviation of the experiment's trajectory as compared to a dry-mass trajectory. To fully capture the effect of liquid redistribution on the experiment trajectory, each thruster is modeled as an independent force point in the Flow-3D simulation. This is accomplished by modifying the total number of independent forces in the GMO model from the standard five to twenty-four. Results demonstrate that the most effective slosh-generating maneuvers for all motions occur when the SSE thrusters produce the largest changes in SSE acceleration. The results also demonstrate that several centimeters of trajectory deviation between the dry and slosh cases occur during the maneuvers; while these deviations seem small, they are measurable by SSE instrumentation.

  5. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features were successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.
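
    The parametric eigenspace method mentioned above for planar pose measurement can be sketched in a few lines. This is a generic outline of the technique as commonly described in the literature, not the report's own implementation (occlusion handling and binary features are omitted): training views taken at known poses are projected into a low-dimensional eigenspace, and a test view is assigned the pose of its nearest projected neighbour.

        import numpy as np

        def build_eigenspace(train_images, n_components=8):
            """train_images: (n_views, n_pixels) rows are vectorized, brightness-normalized views."""
            mean = train_images.mean(axis=0)
            centered = train_images - mean
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]              # principal directions (n_components, n_pixels)
            coords = centered @ basis.T             # training views projected into the eigenspace
            return mean, basis, coords

        def estimate_pose(test_image, mean, basis, coords, train_poses):
            """Return the stored pose of the training view nearest to the test view in eigenspace."""
            z = (test_image - mean) @ basis.T
            nearest = np.argmin(np.linalg.norm(coords - z, axis=1))
            return train_poses[nearest]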

  6. Behavioral demand modulates object category representation in the inferior temporal cortex.

    PubMed

    Emadi, Nazli; Esteky, Hossein

    2014-11-15

    Visual object categorization is a critical task in our daily life. Many studies have explored category representation in the inferior temporal (IT) cortex at the level of single neurons and of the population. However, it is not clear how behavioral demands modulate this category representation. Here, we recorded from single IT neurons in monkeys performing two different tasks with identical visual stimuli: passive fixation and body/object categorization. We found that the category selectivity of IT neurons was improved in the categorization task compared with the passive task, where reward was not contingent on image category. The category improvement was the result of a larger rate enhancement for the preferred category and smaller response variability for both preferred and nonpreferred categories. These specific modulations in the responses of IT category neurons enhanced the signal-to-noise ratio of the neural responses, allowing better discrimination between the preferred and nonpreferred categories. Our results provide new insight into the adaptable category representation in the IT cortex, which depends on behavioral demands.

  7. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.

    PubMed

    Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija

    2015-08-01

    A body of research shows that grammatical gender, although an arbitrary category, is viewed as a system with its own meaning. However, the question remains to what extent grammatical gender influences the shaping of our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results of Experiment 1 show that grammatical gender, as a linguistic property of the pseudo-nouns used as names for musical instruments, significantly affects people's representations of these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments is shaped in the presence of both language and visual information. The results indicate that when linguistic and visual information co-exist, concepts about the selected instruments are formed from all available information from both sources, suggesting that grammatical gender influences the formation of nonverbal concepts but has no privileged status in the matter.

  8. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.

    PubMed

    Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija

    2015-08-01

    A body of research shows that grammatical gender, although an arbitrary category, is viewed as a system with its own meaning. However, the question remains to what extent grammatical gender influences the shaping of our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results of Experiment 1 show that grammatical gender, as a linguistic property of the pseudo-nouns used as names for musical instruments, significantly affects people's representations of these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments is shaped in the presence of both language and visual information. The results indicate that when linguistic and visual information co-exist, concepts about the selected instruments are formed from all available information from both sources, suggesting that grammatical gender influences the formation of nonverbal concepts but has no privileged status in the matter. PMID:24595378

  9. Eye fixation during multiple object attention is based on a representation of discrete spatial foci

    PubMed Central

    Fluharty, Meg; Jentzsch, Ines; Spitschan, Manuel; Vishwanath, Dhanraj

    2016-01-01

    We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, simple 2-foci division of attention or a circular spotlight. Moreover, attentional performance was incompatible with serial selection. Together with previous studies, our findings are compatible with a view that attentional selection and fixation rely on shared spatial representations and suggest a more nuanced definition of overt vs. covert attention. PMID:27561413

  10. Object identification and imagination: an alternative to the meta-representational explanation of autism.

    PubMed

    Woodard, Cooper R; Van Reet, Jennifer

    2011-02-01

    Past research has focused on pretend play in infants with autism because it is considered an early manifestation of symbolic or imaginative thinking. Contradictory research findings have challenged the meta-representational model. The intent of this paper is to propose that pretend play is the behavioral manifestation of developing imaginative ability, the complexity of which is determined by the degree of progression from part-object/inanimate object to whole-object/human object identification. We propose that autism is the result of non-completion of this process to varying degrees. This not only affects early pretend play behaviors, but also later social, language, and cognitive skills derived from the level of imagination-based sophistication achieved during foundational periods available for early identification.

  11. Object identification and imagination: an alternative to the meta-representational explanation of autism.

    PubMed

    Woodard, Cooper R; Van Reet, Jennifer

    2011-02-01

    Past research has focused on pretend play in infants with autism because it is considered an early manifestation of symbolic or imaginative thinking. Contradictory research findings have challenged the meta-representational model. The intent of this paper is to propose that pretend play is the behavioral manifestation of developing imaginative ability, the complexity of which is determined by the degree of progression from part-object/inanimate object to whole-object/human object identification. We propose that autism is the result of non-completion of this process to varying degrees. This not only affects early pretend play behaviors, but also later social, language, and cognitive skills derived from the level of imagination-based sophistication achieved during foundational periods available for early identification. PMID:20532603

  12. Using fMR-Adaptation to Track Complex Object Representations in Perirhinal Cortex

    PubMed Central

    Rubin, Rachael D.; Chesney, Samantha; Cohen, Neal J.; Gonsalves, Brian D.

    2013-01-01

    Views of the brain regions in the medial temporal lobe have shifted from an emphasis on their role in long-term declarative memory to an appreciation of their role in cognitive domains beyond declarative memory, such as implicit memory, working memory, and perception. Recent theoretical accounts emphasize the function of perirhinal cortex in terms of its role in the ventral visual stream. Here, we used functional magnetic resonance adaptation (fMRa) to show that brain structures in the visual processing stream can bind item features prior to the involvement of hippocampal binding mechanisms. Evidence for perceptual binding was assessed by comparing BOLD responses to fused objects with responses to variants of the same objects presented as different, non-fused forms (e.g., physically separate objects). Adaptation of the neural response to fused, but not non-fused, objects was observed in left fusiform cortex and left perirhinal cortex, indicating the involvement of these regions in the perceptual binding of item representations. PMID:23997832

  13. Location-independent and location-linked representations of sound objects.

    PubMed

    Bourquin, Nathalie M-P; Murray, Micah M; Clarke, Stephanie

    2013-06-01

    For the recognition of sounds to benefit perception and action, their neural representations should also encode their current spatial position and their changes in position over time. The dual-stream model of auditory processing postulates separate (albeit interacting) processing streams for sound meaning and for sound location. Using a repetition priming paradigm in conjunction with distributed source modeling of auditory evoked potentials, we determined how individual sound objects are represented within these streams. Changes in perceived location were induced by interaural intensity differences, and sound location was either held constant or shifted across initial and repeated presentations (from one hemispace to the other in the main experiment or between locations within the right hemispace in a follow-up experiment). Location-linked representations were characterized by differences in priming effects between pairs presented to the same vs. different simulated lateralizations. These effects were significant at 20-39 ms post-stimulus onset within a cluster on the posterior part of the left superior and middle temporal gyri; and at 143-162 ms within a cluster on the left inferior and middle frontal gyri. Location-independent representations were characterized by a difference between initial and repeated presentations, independently of whether or not their simulated lateralization was held constant across repetitions. This effect was significant at 42-63 ms within three clusters on the right temporo-frontal region; and at 165-215 ms in a large cluster on the left temporo-parietal convexity. Our results reveal two varieties of representations of sound objects within the ventral/What stream: one location-independent, as initially postulated in the dual-stream model, and the other location-linked. PMID:23357069

  14. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    NASA Astrophysics Data System (ADS)

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space, and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and the assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis of Goldin-Meadow (2000) and McNeill (1992), this study describes the gestures of the TA and the students, as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students in constructing the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that in the laboratory setting, language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the students' gestural patterns allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a brief short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  15. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As the state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100,000 to 1:200,000, and relative accuracies with respect to retraceable lengths in the order of 1:50,000 to 1:100,000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.
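    A quick back-of-the-envelope check helps interpret these ratios: a relative accuracy of 1:100,000 on a 2 m object corresponds to 20 µm in object space. The snippet below is a minimal illustration (not part of the original paper) that converts a relative figure into an absolute tolerance for a given object diameter.

    ```python
    # Convert a relative accuracy figure (e.g. 1:100,000) into an absolute
    # tolerance in object space for a given object diameter.
    def absolute_accuracy(object_diameter_m: float, relative_ratio: float) -> float:
        """Absolute accuracy in metres for a relative accuracy of 1:relative_ratio."""
        return object_diameter_m / relative_ratio

    # Example: a 2 m workpiece measured at 1:100,000 -> 2e-05 m, i.e. 20 micrometres.
    print(absolute_accuracy(2.0, 100_000))
    ```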

  16. fMRI-Adaptation Evidence of Overlapping Neural Representations for Objects Related in Function or Manipulation

    PubMed Central

    Yee, Eiling; Drucker, Daniel M.; Thompson-Schill, Sharon L.

    2010-01-01

    Sensorimotor-based theories of semantic memory contend that semantic information about an object is represented in the neural substrate invoked when we perceive or interact with it. We used fMRI adaptation to test this prediction, measuring brain activation as participants read pairs of words. Pairs shared function (flashlight–lantern), shape (marble–grape), both (pencil–pen), were unrelated (saucer–needle), or were identical (drill–drill). We observed adaptation for pairs with both function and shape similarity in left premotor cortex. Further, degree of function similarity was correlated with adaptation in three regions: two in the left temporal lobe (left medial temporal lobe, left middle temporal gyrus), which has been hypothesized to play a role in multimodal integration, and one in left superior frontal gyrus. We also found that degree of manipulation (i.e., action) and function similarity were both correlated with adaptation in two regions: left premotor cortex and left intraparietal sulcus (involved in guiding actions). Additional considerations suggest that the adaptation in these two regions was driven by manipulation similarity alone; thus, these results imply that manipulation information about objects is encoded in brain regions involved in performing or guiding actions. Unexpectedly, these same two regions showed increased activation (rather than adaptation) for objects similar in shape. Overall, we found evidence (in the form of adaptation) that objects that share semantic features have overlapping representations. Further, the particular regions of overlap provide support for the existence of both sensorimotor and amodal/multimodal representations. PMID:20034582

  17. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a
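    The core of the distortion idea can be illustrated in a few lines: warp the forecast field with a smooth displacement (phase-error) field and treat whatever remains as the residual error. The sketch below is a hypothetical illustration on a 2-D scalar field, not the project's actual decomposition or fitting procedure.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def distortion_residual(forecast, analysis, dx, dy):
        """Warp `forecast` by a smooth displacement field (dx, dy), in grid units,
        and return the remaining (non-phase) error with respect to `analysis`."""
        ny, nx = forecast.shape
        jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        # Sample the forecast at the displaced positions (bilinear interpolation).
        warped = map_coordinates(forecast, [jj + dy, ii + dx], order=1, mode="nearest")
        return analysis - warped

    # Toy example: a forecast anomaly displaced by two grid points in x.
    y, x = np.mgrid[0:64, 0:64]
    analysis = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
    forecast = np.exp(-((x - 30) ** 2 + (y - 32) ** 2) / 50.0)
    residual = distortion_residual(forecast, analysis,
                                   dx=np.full_like(forecast, -2.0),
                                   dy=np.zeros_like(forecast))
    print(float(np.abs(residual).max()))  # small once the phase error is removed
    ```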

  18. Part A: Investigations of the Synthesis of Pyrazinochlorins and Other Porphyrin Derivatives. Part B: investigations of Student Translation Between 2-D/3-D Representations of Molecules

    NASA Astrophysics Data System (ADS)

    Dean, Michelle L.

    This dissertation will be composed of two parts. The first part was completed under the direction of Dr. Christian Bruckner and outlines the synthesis of porphyrins and related derivatives. It explores specifically the synthesis of pyrazinoporphyrin, a pyrrole-modified porphyrin, the use of microwaves for porphyrin synthesis, and the synthesis of a novel building block for use in an expanded porphyrin structure. Lastly, this part will describe a laboratory experiment, suitable for an organic chemistry course, which investigates the photophysical properties of porphyrins using brown eggs as a source of protoporphyrin IX. The second part, under the advisement of Dr. Tyson Miller, will detail research conducted on students' ability to translate between two-dimensional and three-dimensional representations of molecules. Using grounded theory and formal interviews, it investigated what errors students make as they translate from a two-dimensional drawing to a three-dimensional model, and vice versa. This part also seeks to gain an understanding, through the use of phenomenography, of what factors contribute to cognitive overload when drawing chiral centers.

  19. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD 5,000. This scanner uses visible light sensing to capture structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed, for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  20. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three-dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two-dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model adopted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose a data constellation technique based on space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, which try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling improves data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
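    The practical payoff of a space-filling curve is that objects close along the 1-D curve index tend to be close in 3-D space, so storing and indexing records in curve order improves the locality of range and neighbour queries. The sketch below is a hypothetical illustration that uses a Morton (Z-order) code as a simpler stand-in for the 3D Hilbert curve discussed above; both map quantized 3D coordinates to a single sort key.

    ```python
    # Map quantized 3D building centroids to a 1-D space-filling-curve key.
    # Morton (Z-order) interleaving is used here as a simpler stand-in for the
    # 3D Hilbert curve; the clustering idea is the same.
    def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((z >> i) & 1) << (3 * i + 2)
        return key

    # Hypothetical building blocks quantized to a 1024^3 grid.
    buildings = {"blockA": (10, 200, 3), "blockB": (11, 201, 3), "blockC": (900, 50, 40)}
    # Sorting records by the curve key keeps spatially adjacent blocks adjacent in storage.
    for name, (x, y, z) in sorted(buildings.items(), key=lambda kv: morton3d(*kv[1])):
        print(name, morton3d(x, y, z))
    ```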

  1. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    PubMed

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms. PMID:26353296
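    To make the mechanics of the trivial templates concrete, the sketch below reconstructs a candidate patch as a sparse combination of object templates plus identity ("trivial") templates that absorb occlusion and noise; the candidate is then judged from the object-template part alone. This is a hypothetical illustration using an off-the-shelf Lasso solver, not the authors' implementation, and it covers a single feature rather than the multi-feature joint optimization.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    d, n_templates = 64, 10                      # patch dimension, number of object templates
    T = rng.normal(size=(d, n_templates))        # object templates (vectorized patches)
    T /= np.linalg.norm(T, axis=0)
    D = np.hstack([T, np.eye(d)])                # dictionary = object templates + trivial templates

    candidate = T @ rng.normal(size=n_templates) # candidate observation generated from the templates
    candidate[:8] += 5.0                         # simulate a partial occlusion

    lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10_000)
    lasso.fit(D, candidate)
    obj_coef, trivial_coef = lasso.coef_[:n_templates], lasso.coef_[n_templates:]

    # Reconstruction from object templates only; nonzero trivial coefficients
    # mark the pixels explained away as occlusion or noise.
    residual = np.linalg.norm(candidate - T @ obj_coef)
    print(residual, np.count_nonzero(trivial_coef))
    ```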

  2. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D printer and used for teaching chemical education…

  3. Planning 3-D collision-free paths using spheres

    NASA Technical Reports Server (NTRS)

    Bonner, Susan; Kelley, Robert B.

    1989-01-01

    A scheme for the representation of objects, the Successive Spherical Approximation (SSA), facilitates the rapid planning of collision-free paths in a 3-D, dynamic environment. The hierarchical nature of the SSA allows collision-free paths to be determined efficiently while still providing for the exact representation of dynamic objects. The concept of a freespace cell is introduced to allow human 3-D conceptual knowledge to be used in facilitating satisfying choices for paths. Collisions can be detected at a rate better than 1 second per environment object per path. This speed enables the path planning process to apply a hierarchy of rules to create a heuristically satisfying collision-free path.
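    The sphere representation makes the core collision test extremely cheap: two objects approximated by spheres cannot intersect unless the distance between their centres is at most the sum of their radii, and a hierarchy of progressively tighter spheres only refines the test where a coarse sphere reports a potential collision. The following is a minimal, hypothetical sketch of that basic test applied along a discretised path; it is not the SSA implementation itself.

    ```python
    import numpy as np

    def spheres_collide(c1, r1, c2, r2) -> bool:
        """Two spheres intersect iff the distance between centres is <= r1 + r2."""
        return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= r1 + r2

    def path_is_collision_free(path_points, robot_radius, obstacles) -> bool:
        """Check a discretised path (robot-centre positions) against obstacles,
        each approximated by a single bounding sphere (centre, radius)."""
        return not any(
            spheres_collide(p, robot_radius, centre, radius)
            for p in path_points
            for centre, radius in obstacles
        )

    # Hypothetical example: a straight-line path passing a spherical obstacle.
    path = [np.array([0.0, t, 0.0]) for t in np.linspace(0.0, 10.0, 50)]
    obstacles = [(np.array([2.0, 5.0, 0.0]), 1.0)]
    print(path_is_collision_free(path, robot_radius=0.5, obstacles=obstacles))
    ```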

  4. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  5. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, many researchers use terrestrial images with close-range photogrammetry, DSMs and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one is based on automation (automatic, semi-automatic and manual methods), and the other on data input techniques (photogrammetry or laser techniques). After a detailed study of these, we give the conclusions of this research, together with a short discussion of justification and analysis and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  6. Space-dependent representation of objects and other's action in monkey ventral premotor grasping neurons.

    PubMed

    Bonini, Luca; Maranesi, Monica; Livi, Alessandro; Fogassi, Leonardo; Rizzolatti, Giacomo

    2014-03-12

    The macaque ventral premotor area F5 hosts two types of visuomotor grasping neurons: "canonical" neurons, which respond to visually presented objects and underlie visuomotor transformation for grasping, and "mirror" neurons, which respond during the observation of others' actions, likely playing a role in action understanding. Some previous evidence suggested that canonical and mirror neurons could be anatomically segregated in different sectors of area F5. Here we investigated the functional properties of single neurons in the hand field of area F5 using various tasks similar to those originally designed to investigate visual responses to objects and actions. By using linear multielectrode probes, we were able to simultaneously record different types of neurons and to precisely localize their cortical depth. We recorded 464 neurons, of which 243 showed visuomotor properties. Canonical and mirror neurons were often present in the same cortical sites; and, most interestingly, a set of neurons showed both canonical and mirror properties, discharging to object presentation as well as during the observation of the experimenter's goal-directed acts (canonical-mirror neurons). Typically, visual responses to objects were constrained to the monkey's peripersonal space, whereas action observation responses were less space-selective. Control experiments showed that space-constrained coding of objects mostly relies on an operational (action possibility) rather than metric (absolute distance) reference frame. Interestingly, canonical-mirror neurons appear to code objects as targets for both one's own and others' actions, suggesting that they could play a role in the predictive representation of others' impending actions.

  7. Multitask joint spatial pyramid matching using sparse representation with dynamic coefficients for object recognition

    NASA Astrophysics Data System (ADS)

    Hajigholam, Mohammad-Hossein; Raie, Abolghasem-Asadollah; Faez, Karim

    2016-03-01

    Object recognition is considered a necessary part of many computer vision applications. Recently, sparse coding methods, based on representing a sparse feature from an image, have shown remarkable results on several object recognition benchmarks, but the precision obtained by these methods is not yet sufficient. The problem is most acute when few training images are available. As such, using multiple features and multitask dictionaries appears to be crucial to achieving better results. We use multitask joint sparse representation, using dynamic coefficients to connect these sparse features. In other words, we calculate the importance of each feature for each class separately. This allows the features to be used efficiently and appropriately for each class. We use the variance of features and particle swarm optimization algorithms to obtain these dynamic coefficients. Experimental results of our work on the Caltech-101 and Caltech-256 databases show more accuracy compared with state-of-the-art methods on the same databases.
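    The per-class "dynamic coefficients" can be read as feature weights that reward features whose class means are well separated relative to their within-class spread. The sketch below is a hypothetical variance-ratio weighting along those lines; the additional particle-swarm refinement described above is omitted.

    ```python
    import numpy as np

    def variance_ratio_weights(features_by_class):
        """features_by_class: list over feature types; each entry is a dict
        {class_label: array of shape (n_samples, dim)}.  Returns one weight per
        feature type: between-class variance of the class means divided by the
        mean within-class variance, normalized to sum to one."""
        weights = []
        for per_class in features_by_class:
            means = np.stack([v.mean(axis=0) for v in per_class.values()])
            within = np.mean([v.var(axis=0).mean() for v in per_class.values()])
            between = means.var(axis=0).mean()
            weights.append(between / (within + 1e-12))
        weights = np.asarray(weights)
        return weights / weights.sum()

    # Hypothetical toy data: feature 0 separates the two classes, feature 1 does not.
    rng = np.random.default_rng(1)
    f0 = {"cat": rng.normal(0, 1, (20, 5)), "dog": rng.normal(3, 1, (20, 5))}
    f1 = {"cat": rng.normal(0, 1, (20, 5)), "dog": rng.normal(0, 1, (20, 5))}
    print(variance_ratio_weights([f0, f1]))  # the first weight should dominate
    ```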

  8. Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.

    PubMed

    Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S

    2015-01-01

    Additive manufacturing or 3D printing is the process by which three dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics in a layered fashion resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons.

  9. The Structure of Three-Dimensional Object Representations in Human Vision: Evidence from Whole-Part Matching

    ERIC Educational Resources Information Center

    Leek, E. Charles; Reppa, Irene; Arguin, Martin

    2005-01-01

    This article examines how the human visual system represents the shapes of 3-dimensional (3D) objects. One long-standing hypothesis is that object shapes are represented in terms of volumetric component parts and their spatial configuration. This hypothesis is examined in 3 experiments using a whole-part matching paradigm in which participants…

  10. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the estimated volumes from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria developed by us for the 3D imaging of processed data. Great differences were found between the volumes of the liver findings estimated with the three different techniques applied. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible.

  11. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  12. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  13. You shall know an object by the company it keeps: An investigation of semantic representations derived from object co-occurrence in visual scenes

    PubMed Central

    Sadeghi, Zahra; McClelland, James L.; Hoffman, Paul

    2015-01-01

    An influential position in lexical semantics holds that semantic representations for words can be derived through analysis of patterns of lexical co-occurrence in large language corpora. Firth (1957) famously summarised this principle as “you shall know a word by the company it keeps”. We explored whether the same principle could be applied to non-verbal patterns of object co-occurrence in natural scenes. We performed latent semantic analysis (LSA) on a set of photographed scenes in which all of the objects present had been manually labelled. This resulted in a representation of objects in a high-dimensional space in which similarity between two objects indicated the degree to which they appeared in similar scenes. These representations revealed similarities among objects belonging to the same taxonomic category (e.g., items of clothing) as well as cross-category associations (e.g., between fruits and kitchen utensils). We also compared representations generated from this scene dataset with two established methods for elucidating semantic representations: (a) a published database of semantic features generated verbally by participants and (b) LSA applied to a linguistic corpus in the usual fashion. Statistical comparisons of the three methods indicated significant association between the structures revealed by each method, with the scene dataset displaying greater convergence with feature-based representations than did LSA applied to linguistic data. The results indicate that information about the conceptual significance of objects can be extracted from their patterns of co-occurrence in natural environments, opening the possibility for such data to be incorporated into existing models of conceptual representation. PMID:25196838
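    As a concrete sketch of the procedure: build an object-by-scene co-occurrence matrix, apply a truncated SVD, and use cosine similarity between the resulting object vectors as the measure of shared scene context. The example below is a hypothetical illustration with scikit-learn, not the authors' pipeline or dataset.

    ```python
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical labelled scenes: each scene lists the objects present in it.
    scenes = [
        ["fork", "plate", "glass"],
        ["fork", "plate", "napkin"],
        ["pillow", "lamp", "bed"],
        ["lamp", "bed", "book"],
    ]
    objects = sorted({o for scene in scenes for o in scene})
    index = {o: i for i, o in enumerate(objects)}

    # Object-by-scene count matrix (the analogue of a term-document matrix).
    counts = np.zeros((len(objects), len(scenes)))
    for j, scene in enumerate(scenes):
        for o in scene:
            counts[index[o], j] += 1

    # Latent semantic analysis: project objects into a low-dimensional space.
    embeddings = TruncatedSVD(n_components=2, random_state=0).fit_transform(counts)
    sim = cosine_similarity(embeddings)
    print("fork~plate:", round(float(sim[index["fork"], index["plate"]]), 3))
    print("fork~bed:  ", round(float(sim[index["fork"], index["bed"]]), 3))
    ```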

  14. You shall know an object by the company it keeps: An investigation of semantic representations derived from object co-occurrence in visual scenes.

    PubMed

    Sadeghi, Zahra; McClelland, James L; Hoffman, Paul

    2015-09-01

    An influential position in lexical semantics holds that semantic representations for words can be derived through analysis of patterns of lexical co-occurrence in large language corpora. Firth (1957) famously summarised this principle as "you shall know a word by the company it keeps". We explored whether the same principle could be applied to non-verbal patterns of object co-occurrence in natural scenes. We performed latent semantic analysis (LSA) on a set of photographed scenes in which all of the objects present had been manually labelled. This resulted in a representation of objects in a high-dimensional space in which similarity between two objects indicated the degree to which they appeared in similar scenes. These representations revealed similarities among objects belonging to the same taxonomic category (e.g., items of clothing) as well as cross-category associations (e.g., between fruits and kitchen utensils). We also compared representations generated from this scene dataset with two established methods for elucidating semantic representations: (a) a published database of semantic features generated verbally by participants and (b) LSA applied to a linguistic corpus in the usual fashion. Statistical comparisons of the three methods indicated significant association between the structures revealed by each method, with the scene dataset displaying greater convergence with feature-based representations than did LSA applied to linguistic data. The results indicate that information about the conceptual significance of objects can be extracted from their patterns of co-occurrence in natural environments, opening the possibility for such data to be incorporated into existing models of conceptual representation.

  15. Benefits of an Object-oriented Database Representation for Controlled Medical Terminologies

    PubMed Central

    Gu, Huanying; Halper, Michael; Geller, James; Perl, Yehoshua

    1999-01-01

    Objective: Controlled medical terminologies (CMTs) have been recognized as important tools in a variety of medical informatics applications, ranging from patient-record systems to decision-support systems. Controlled medical terminologies are typically organized in semantic network structures consisting of tens to hundreds of thousands of concepts. This overwhelming size and complexity can be a serious barrier to their maintenance and widespread utilization. The authors propose the use of object-oriented databases to address the problems posed by the extensive scope and high complexity of most CMTs for maintenance personnel and general users alike. Design: The authors present a methodology that allows an existing CMT, modeled as a semantic network, to be represented as an equivalent object-oriented database. Such a representation is called an object-oriented health care terminology repository (OOHTR). Results: The major benefit of an OOHTR is its schema, which provides an important layer of structural abstraction. Using the high-level view of a CMT afforded by the schema, one can gain insight into the CMT's overarching organization and begin to better comprehend it. The authors' methodology is applied to the Medical Entities Dictionary (MED), a large CMT developed at Columbia-Presbyterian Medical Center. Examples of how the OOHTR schema facilitated updating, correcting, and improving the design of the MED are presented. Conclusion: The OOHTR schema can serve as an important abstraction mechanism for enhancing comprehension of a large CMT, and thus promotes its usability. PMID:10428002

  16. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Cities and urban-area entities such as building structures are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient to cope with the complexity of new developments in big cities. The emergence of 3D city models has boosted efficiency in analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on 3D city models. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available on the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as they accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models in five well-defined Levels of Detail (LoD), namely LoD0

  17. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  18. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  19. Hebbian learning reconsidered: representation of static and dynamic objects in associative neural nets.

    PubMed

    Herz, A; Sulzer, B; Kühn, R; van Hemmen, J L

    1989-01-01

    According to Hebb's postulate for learning, information presented to a neural net during a learning session is stored in synaptic efficacies. Long-term potentiation occurs only if the postsynaptic neuron becomes active in a time window set up by the presynaptic one. We carefully interpret and mathematically implement the Hebb rule so as to handle both stationary and dynamic objects such as single patterns and cycles. Since the natural dynamics contains a rather broad distribution of delays, the key idea is to incorporate these delays in the learning session. As theory and numerical simulations show, the resulting procedure is surprisingly robust and faithful. It also turns out that pure Hebbian learning is learning by selection: the network produces synaptic representations that are selected according to their resonance with the input percepts.
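    A minimal sketch of a delay-aware Hebb rule along these lines (a hypothetical discretization, not the authors' formulation): the efficacy from presynaptic neuron j to postsynaptic neuron i for delay tau grows whenever activity of j at time t - tau coincides with activity of i at time t, so that a stored sequence can later be replayed through the delayed couplings.

    ```python
    import numpy as np

    def hebbian_learning_with_delays(patterns, delays):
        """patterns: array of shape (T, N) holding a temporal sequence of +/-1 states.
        delays: iterable of integer transmission delays (in time steps).
        Returns W[k, i, j], built from co-activity of postsynaptic neuron i at
        time t and presynaptic neuron j at time t - delays[k]."""
        T, N = patterns.shape
        W = np.zeros((len(delays), N, N))
        for k, tau in enumerate(delays):
            for t in range(tau, T):
                W[k] += np.outer(patterns[t], patterns[t - tau]) / N
        return W

    # Store a short sequence of random binary patterns.
    rng = np.random.default_rng(0)
    seq = rng.choice([-1.0, 1.0], size=(5, 100))
    W = hebbian_learning_with_delays(seq, delays=[1, 2])

    # Retrieval sketch: the local field at time t sums delayed contributions from
    # earlier network states; here it is seeded with the stored sequence itself.
    h = sum(W[k] @ seq[3 - tau] for k, tau in enumerate([1, 2]))
    print(np.mean(np.sign(h) == seq[3]))  # overlap with the next stored pattern
    ```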

  20. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread throughout the astronomical community, and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition, the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  1. Object Oriented Programming Systems (OOPS) and frame representations: An investigation of programming paradigms

    NASA Technical Reports Server (NTRS)

    Auty, David

    1988-01-01

    The project was initiated to research Object Oriented Programming Systems (OOPS) and frame representation systems, their significance and applicability, and their implementation in or relationship to Ada. Object oriented is currently a very popular conceptual adjective. Object oriented programming, in particular, is promoted as a particularly productive approach to programming; an approach which maximizes opportunities for code reuse and lends itself to the definition of convenient and well-developed units. Such units are thus expected to be usable in a variety of situations, beyond the typical highly specific unit development of other approaches. Frame representation systems share a common heritage and similar conceptual foundations. Together they represent a quickly emerging alternative approach to programming. The approach taken here is to first define the terms, starting with relevant concepts and using these to put bounds on what is meant by OOPS and frames. From this, the possibilities of merging OOPS with Ada were pursued, which further elucidates the significant characteristics that make up this programming approach. Finally, some of the merits and demerits of OOPS are briefly considered as a way of addressing the applicability of OOPS to various programming tasks.
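    To make the kinship between the two paradigms concrete: a frame is essentially a structured record of named slots with defaults and attached procedures, which maps naturally onto an object with fields and methods. The sketch below is a hypothetical illustration in Python rather than Ada, and is not drawn from the report itself.

    ```python
    # A "frame" rendered as an object: named slots with defaults, plus a slot
    # whose value is computed on demand (akin to an attached procedure).
    class Frame:
        def __init__(self, name, **slots):
            self.name = name
            self.slots = dict(slots)

        def get(self, slot, default=None):
            value = self.slots.get(slot, default)
            return value() if callable(value) else value

    elephant = Frame("elephant", colour="grey", legs=4, mass_kg=lambda: 5000)
    print(elephant.get("colour"), elephant.get("mass_kg"), elephant.get("habitat", "unknown"))
    ```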

  2. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
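    One common metrological check on the raw point cloud is to fit a known geometric primitive, such as a calibrated sphere, and report the recovered size and the fit residuals against their reference values. The sketch below is a hypothetical algebraic least-squares sphere fit used for that kind of evaluation; the specific VDI/VDE quality parameters are not reproduced here.

    ```python
    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit: solve
        x^2 + y^2 + z^2 = 2a*x + 2b*y + 2c*z + d for centre (a, b, c) and radius."""
        P = np.asarray(points)
        A = np.hstack([2 * P, np.ones((len(P), 1))])
        b = (P ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre, d = sol[:3], sol[3]
        return centre, np.sqrt(d + centre @ centre)

    # Hypothetical check: noisy samples of a reference sphere of radius 25 mm.
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cloud = np.array([10.0, -5.0, 40.0]) + 25.0 * dirs + rng.normal(scale=0.02, size=dirs.shape)

    centre, radius = fit_sphere(cloud)
    form_error = np.linalg.norm(cloud - centre, axis=1) - radius
    print(radius, form_error.std())  # compare against the 25 mm reference and its tolerance
    ```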

  3. 3D numerical test objects for the evaluation of a software used for an automatic analysis of a linear accelerator mechanical stability

    NASA Astrophysics Data System (ADS)

    Torfeh, Tarraf; Beaumont, Stéphane; Guédon, Jeanpierre; Benhdech, Yassine

    2010-04-01

    Mechanical stability of a medical LINear ACcelerator (LINAC), particularly the quality of the gantry, collimator and table rotations and the accuracy of the isocenter position, is crucial for the radiation therapy process, especially in stereotactic radiosurgery and in Image Guided Radiation Therapy (IGRT), where this mechanical stability is perturbed by the additional weight of the kV x-ray tube and detector. In this paper, we present a new method to evaluate software that is used to perform an automatic measurement of the "size" (flex map) and the location of the kV and MV isocenters of the linear accelerator. The method consists of developing a complete numerical 3D simulation of a LINAC and physical phantoms in order to produce Electronic Portal Imaging Device (EPID) images including calibrated distortions of the mechanical movement of the gantry and isocenter misalignments.

  4. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
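    As a reminder of what implicit time integration involves for transient heat transfer: each step solves a linear system combining the capacitance and conductance matrices, which keeps the scheme stable even for large time steps. The sketch below is a generic 1-D backward-Euler illustration of that idea, using made-up material values; it is not TACO3D code.

    ```python
    import numpy as np

    # 1-D transient heat conduction with backward-Euler (implicit) time integration:
    # (C/dt + K) T_new = (C/dt) T_old, with one end held at a fixed temperature.
    n, L = 51, 1.0                       # nodes, rod length [m]
    dx = L / (n - 1)
    k, rho_c = 50.0, 4.0e6               # conductivity [W/m/K], volumetric heat capacity [J/m^3/K]
    dt, steps = 10.0, 200                # time step [s], number of steps

    C = np.full(n, rho_c * dx)           # lumped (diagonal) capacitance
    K = np.zeros((n, n))
    for e in range(n - 1):               # assemble two-node element conductances
        K[e:e + 2, e:e + 2] += (k / dx) * np.array([[1.0, -1.0], [-1.0, 1.0]])

    T = np.full(n, 20.0)                 # initial temperature [degC]
    A = np.diag(C / dt) + K
    A[0, :], A[0, 0] = 0.0, 1.0          # Dirichlet condition: hot end at 100 degC
    for _ in range(steps):
        rhs = (C / dt) * T
        rhs[0] = 100.0
        T = np.linalg.solve(A, rhs)
    print(T[::10])                       # temperature profile after 2000 s
    ```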

  5. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  6. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  7. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  8. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  9. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking

    PubMed Central

    Lin, Zhicheng; He, Sheng

    2012-01-01

    Object identities (“what”) and their spatial locations (“where”) are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe frame-centered effects. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects (“files”) within the reference frame (“cabinet”) are orderly coded relative to the frame. PMID:23104817

  10. Converting an integrated hospital formulary into an object-oriented database representation.

    PubMed

    Gu, H; Liu, L M; Halper, M; Geller, J; Perl, Y

    1998-01-01

    Controlled Medical Vocabularies (CMVs) have proven to be extremely useful in their support of the tasks of information sharing and integration, communication among various software applications, and decision support. Modeling a CMV as an Object-Oriented Database (OODB) provides additional benefits such as increased support for vocabulary comprehension and flexible access. In this paper, we describe the process of modeling and converting an existing integrated hospital formulary (i.e., set of pharmacological concepts) into an equivalent OODB representation, which, in general, we refer to as an Object-Oriented Healthcare Vocabulary Repository (OOHVR). The source for our example OOHVR is a formulary provided by the Connecticut Healthcare Research and Education Foundation (CHREF). Utilizing this source formulary together with the semantic hierarchy composed of major and minor drug classes defined as part of the National Drug Code (NDC) directory, we constructed a CMV that was eventually converted into its OOHVR form (the CHREF-OOHVR). The actual conversion step was carried out automatically by a program, called the OOHVR Generator, that we have developed. At present, the CHREF-OOHVR is running on top of ONTOS, a commercial OODB management system, and is accessible on the Web. PMID:9929323

  11. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  12. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  13. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.
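
    The abstract reports the resulting quality/complexity/bandwidth trade-offs but not the adaptation rule itself. A minimal sketch of the kind of per-object level-of-detail selection such a system might perform, with entirely hypothetical budget figures, is:

      def pick_lod(levels, byte_budget, triangle_budget):
          """levels: list of dicts with 'bytes', 'triangles', 'quality', ordered coarse to fine.
          Return the finest level of detail that fits both the remaining download
          budget and the terminal's rendering budget (illustrative policy only)."""
          best = levels[0]
          for lod in levels:
              if lod["bytes"] <= byte_budget and lod["triangles"] <= triangle_budget:
                  best = lod
          return best

      # Hypothetical scalable encoding of one object (geometry refinements only).
      levels = [
          {"bytes": 20_000,  "triangles": 1_000,  "quality": 0.55},
          {"bytes": 80_000,  "triangles": 6_000,  "quality": 0.80},
          {"bytes": 300_000, "triangles": 40_000, "quality": 0.95},
      ]
      print(pick_lod(levels, byte_budget=100_000, triangle_budget=10_000))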

  14. 3-D volumetric computed tomographic scoring as an objective outcome measure for chronic rhinosinusitis: Clinical correlations and comparison to Lund-Mackay scoring

    PubMed Central

    Pallanch, John; Yu, Lifeng; Delone, David; Robb, Rich; Holmes, David R.; Camp, Jon; Edwards, Phil; McCollough, Cynthia H.; Ponikau, Jens; Dearking, Amy; Lane, John; Primak, Andrew; Shinkle, Aaron; Hagan, John; Frigas, Evangelo; Ocel, Joseph J.; Tombers, Nicole; Siwani, Rizwan; Orme, Nicholas; Reed, Kurtis; Jerath, Nivedita; Dhillon, Robinder; Kita, Hirohito

    2014-01-01

    Background We aimed to test the hypothesis that 3-D volume-based scoring of computed tomographic (CT) images of the paranasal sinuses was superior to Lund-Mackay CT scoring of disease severity in chronic rhinosinusitis (CRS). We determined correlation between changes in CT scores (using each scoring system) with changes in other measures of disease severity (symptoms, endoscopic scoring, and quality of life) in patients with CRS treated with triamcinolone. Methods The study group comprised 48 adult subjects with CRS. Baseline symptoms and quality of life were assessed. Endoscopy and CT scans were performed. Patients received a single systemic dose of intramuscular triamcinolone and were reevaluated 1 month later. Strengths of the correlations between changes in CT scores and changes in CRS signs and symptoms and quality of life were determined. Results We observed some variability in degree of improvement for the different symptom, endoscopic, and quality-of-life parameters after treatment. Improvement of parameters was significantly correlated with improvement in CT disease score using both CT scoring methods. However, volumetric CT scoring had greater correlation with these parameters than Lund-Mackay scoring. Conclusion Volumetric scoring exhibited higher degree of correlation than Lund-Mackay scoring when comparing improvement in CT score with improvement in score for symptoms, endoscopic exam, and quality of life in this group of patients who received beneficial medical treatment for CRS. PMID:24106202
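
    The comparison described above rests on two simple computations: a volume-based CT score per scan and the rank correlation of its change with changes in the clinical measures. The sketch below (Python with NumPy/SciPy) shows both; the mask arrays and the per-patient numbers are hypothetical placeholders, not study data.

      import numpy as np
      from scipy.stats import spearmanr

      def volumetric_score(sinus_mask, opacified_mask, voxel_volume_mm3):
          """Percent of the total paranasal sinus volume that is opacified.
          Both masks are boolean 3D arrays from a segmented CT volume."""
          sinus_vol = sinus_mask.sum() * voxel_volume_mm3
          disease_vol = (sinus_mask & opacified_mask).sum() * voxel_volume_mm3
          return 100.0 * disease_vol / sinus_vol

      # Hypothetical per-patient changes (post - pre) in CT score and symptom score.
      delta_ct = np.array([-12.0, -30.5, -4.2, -18.9, -25.0])
      delta_symptoms = np.array([-3, -8, -1, -5, -6])
      rho, p_value = spearmanr(delta_ct, delta_symptoms)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")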

  15. Lateralized Effects of Categorical and Coordinate Spatial Processing of Component Parts on the Recognition of 3D Non-Nameable Objects

    ERIC Educational Resources Information Center

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-01-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…

  16. SNL3dFace

    2007-07-20

This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  17. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
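
    As a rough sketch of the feature-projection and matching steps described above (not the SNL code itself), the following Python builds a PCA subspace from vectorized, already-normalized faces and forms a cosine-similarity matrix between probe and gallery projections; the FLDA stage and the ICP-based normalization are omitted, and the random data stand in for aligned XYZ coordinate vectors.

      import numpy as np

      def pca_fit(train, n_components):
          """train: (n_faces, n_features) matrix of normalized faces, one per row."""
          mean = train.mean(axis=0)
          u, s, vt = np.linalg.svd(train - mean, full_matrices=False)
          return mean, vt[:n_components]          # mean face and PCA basis

      def project(faces, mean, basis):
          return (faces - mean) @ basis.T

      def cosine_similarity_matrix(probe_feats, gallery_feats):
          p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
          g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
          return p @ g.T                           # (n_probe, n_gallery) similarity scores

      # Hypothetical data: 40 training faces, 5 probes, 10 gallery faces, 9000 features each.
      rng = np.random.default_rng(0)
      train = rng.normal(size=(40, 9000))
      mean, basis = pca_fit(train, n_components=20)
      probe = project(rng.normal(size=(5, 9000)), mean, basis)
      gallery = project(rng.normal(size=(10, 9000)), mean, basis)
      scores = cosine_similarity_matrix(probe, gallery)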

  18. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  19. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology.

  20. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  1. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  2. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  3. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  4. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convolved with the point spread function of the functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.
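
    The final step mentioned above, convolving each extracted anatomical object with the point spread function of the functional data before quantification, can be illustrated as follows (Python/SciPy; a Gaussian PSF, the FWHM value and the voxel size are assumptions made for the example):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def smoothed_region_mean(functional, region_mask, fwhm_mm, voxel_mm):
          """Atlas-based quantification: weight the functional image by the
          anatomical region convolved with a Gaussian PSF of the given FWHM."""
          sigma_vox = (fwhm_mm / 2.3548) / voxel_mm        # FWHM -> sigma, in voxels
          weights = gaussian_filter(region_mask.astype(float), sigma_vox)
          return (functional * weights).sum() / weights.sum()

      # Hypothetical 3D PET volume and a binary region mask from the individual atlas.
      pet = np.random.rand(64, 64, 64)
      mask = np.zeros((64, 64, 64), bool)
      mask[20:30, 25:35, 30:40] = True
      print(smoothed_region_mean(pet, mask, fwhm_mm=8.0, voxel_mm=2.0))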

  5. Decomposing and Connecting Object Representations in 5- to 9-Year-Old Children's Drawing Behaviour

    ERIC Educational Resources Information Center

    Picard, Delphine; Vinter, Annie

    2006-01-01

    This study aimed at specifying the content of the representational redescription (RR) process assumed by Karmiloff-Smith (1992) with respect to the emergence of inter-representational flexibility in children's drawing behaviour. We hypothesized that the RR process included part-whole decomposition processes that are essential to the ability to…

  6. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1998-01-01

    We proposed a novel characterization of errors for numerical weather predictions. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has several important applications, including the model assessment application and the objective analysis application. In this project, we have focused on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP), the 500 hPa geopotential height, and the 315 K potential vorticity fields for forecasts of the short and medium range. The forecasts are generated by the Goddard Earth Observing System (GEOS) data assimilation system with and without ERS-1 scatterometer data. A great deal of novel work has been accomplished under the current contract. In broad terms, we have developed and tested an efficient algorithm for determining distortions. The algorithm and constraints are now ready for application to larger data sets to be used to determine the statistics of the distortion as outlined above, and to be applied in data analysis by using GEOS water vapor imagery to correct short-term forecast errors.

  7. Neonatal representation of odour objects: distinct memories of the whole and its parts.

    PubMed

    Coureaud, Gérard; Thomas-Danguin, Thierry; Wilson, Donald A; Ferreira, Guillaume

    2014-08-22

    Extraction of relevant information from highly complex environments is a prerequisite to survival. Within odour mixtures, such information is contained in the odours of specific elements or in the mixture configuration perceived as a whole unique odour. For instance, an AB mixture of the element A (ethyl isobutyrate) and the element B (ethyl maltol) generates a configural AB percept in humans and apparently in another species, the rabbit. Here, we examined whether the memory of such a configuration is distinct from the memory of the individual odorants. Taking advantage of the newborn rabbit's ability to learn odour mixtures, we combined behavioural and pharmacological tools to specifically eliminate elemental memory of A and B after conditioning to the AB mixture and evaluate consequences on configural memory of AB. The amnesic treatment suppressed responsiveness to A and B but not to AB. Two other experiments confirmed the specific perception and particular memory of the AB mixture. These data demonstrate the existence of configurations in certain odour mixtures and their representation as unique objects: after learning, animals form a configural memory of these mixtures, which coexists with, but is relatively dissociated from, memory of their elements. This capability emerges very early in life. PMID:24990670

  8. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
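
    A minimal planar sketch of the distortion idea is given below: the forecast is displaced and bias-corrected so as to minimize the residual error against the analysis. For brevity it uses a single constant displacement and bias rather than the smoothly varying spherical-harmonic fields described above, and a generic optimizer; none of this reflects the actual GEOS implementation.

      import numpy as np
      from scipy.ndimage import shift
      from scipy.optimize import minimize

      def distortion_cost(params, forecast, analysis):
          """params = (dy, dx, bias): displace the forecast and bias-correct it,
          then measure the remaining (residual) squared error."""
          dy, dx, bias = params
          warped = shift(forecast, (dy, dx), order=1, mode="nearest")
          return np.mean((warped + bias - analysis) ** 2)

      # Hypothetical 500 hPa height anomalies: the "analysis" is the forecast
      # displaced by 3 grid points in x and offset by 20 m everywhere.
      y, x = np.mgrid[0:80, 0:80]
      forecast = 50.0 * np.exp(-((x - 30) ** 2 + (y - 40) ** 2) / 200.0)
      analysis = 50.0 * np.exp(-((x - 33) ** 2 + (y - 40) ** 2) / 200.0) + 20.0

      res = minimize(distortion_cost, x0=[0.0, 0.0, 0.0], args=(forecast, analysis),
                     method="Powell")
      dy, dx, bias = res.x
      print(f"displacement approx ({dx:.1f}, {dy:.1f}) grid points, bias approx {bias:.1f} m")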

  9. 3-D Maps and Compasses in the Brain.

    PubMed

    Finkelstein, Arseny; Las, Liora; Ulanovsky, Nachum

    2016-07-01

    The world has a complex, three-dimensional (3-D) spatial structure, but until recently the neural representation of space was studied primarily in planar horizontal environments. Here we review the emerging literature on allocentric spatial representations in 3-D and discuss the relations between 3-D spatial perception and the underlying neural codes. We suggest that the statistics of movements through space determine the topology and the dimensionality of the neural representation, across species and different behavioral modes. We argue that hippocampal place-cell maps are metric in all three dimensions, and might be composed of 2-D and 3-D fragments that are stitched together into a global 3-D metric representation via the 3-D head-direction cells. Finally, we propose that the hippocampal formation might implement a neural analogue of a Kalman filter, a standard engineering algorithm used for 3-D navigation. PMID:27442069
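
    The "neural analogue of a Kalman filter" mentioned above refers to the standard recursive estimator used in engineering 3-D navigation. For reference, a bare-bones constant-velocity Kalman filter over 3-D position (Python/NumPy, with illustrative noise settings) looks like this:

      import numpy as np

      def kalman_step(x, P, z, dt, q=0.01, r=0.5):
          """One predict/update cycle of a constant-velocity Kalman filter.
          x: state (px, py, pz, vx, vy, vz); P: 6x6 covariance; z: 3-D position fix."""
          F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # constant-velocity dynamics
          H = np.hstack([np.eye(3), np.zeros((3, 3))])     # only position is observed
          Q = q * np.eye(6)
          R = r * np.eye(3)
          # Predict.
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with the new position measurement.
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(6) - K @ H) @ P
          return x, P

      x, P = np.zeros(6), np.eye(6)
      for z in [np.array([0.1, 0.0, 0.2]), np.array([0.5, 0.1, 0.4])]:
          x, P = kalman_step(x, P, z, dt=0.1)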

  10. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of the ISS imagery for the prompt computation of the jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" Space Suit jettisons, as well as the near-real-time (provided several hours after the separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
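
    The core geometric step described above, recovering a 3-D point from its image in two calibrated views and then differencing positions over time to obtain velocity, can be sketched as a linear (DLT) triangulation. The projection matrices and pixel coordinates below are placeholders, not ISS camera data.

      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """Linear (DLT) triangulation of one point seen in two calibrated cameras.
          P1, P2: 3x4 projection matrices; uv1, uv2: image coordinates (u, v)."""
          A = np.vstack([
              uv1[0] * P1[2] - P1[0],
              uv1[1] * P1[2] - P1[1],
              uv2[0] * P2[2] - P2[0],
              uv2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]                       # homogeneous -> Euclidean

      def velocity(points, times):
          """Finite-difference velocity vectors from per-frame 3-D positions."""
          points, times = np.asarray(points), np.asarray(times)
          return np.diff(points, axis=0) / np.diff(times)[:, None]

      # Placeholder cameras: identity intrinsics, second camera offset along x.
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
      X = triangulate(P1, P2, uv1=(0.25, 0.10), uv2=(0.00, 0.10))
      v = velocity([X, X + np.array([0.0, 0.0, -0.2])], [0.0, 1.0])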

  11. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser based), cross platform and accessible to non-expert users. Results An online HTML5/WebGL based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets inputted via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781

  12. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main softwares representing these approaches, respectively. These softwares offer different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, and gives a brief introduction to the strengths and weaknesses of these four image-based techniques. Some personal comments are also given on what can and cannot be done with each software. Finally, the study concludes that each software has advantages and limitations, and that the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  13. Potential of 3D City Models to assess flood vulnerability

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Bochow, Mathias; Schüttig, Martin; Nagel, Claus; Ross, Lutz; Kreibich, Heidi

    2016-04-01

    Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application scheme of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database a spatially detailed flood vulnerability model is developed by means of data-mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared i) to the generally accepted good engineering practice based on area specific loss potential and ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. Comparisons are drawn in terms of

  14. Mapping the human cerebral cortex using 3-D medial manifolds

    NASA Astrophysics Data System (ADS)

    Szekely, Gabor; Brechbuehler, Christian; Kuebler, Olaf; Ogniewicz, Robert; Budinger, Thomas F.

    1992-09-01

Novel imaging technologies provide a detailed look at structure and function of the tremendously complex and variable human brain. Optimal exploitation of the information stored in the rapidly growing collection of acquired and segmented MRI data calls for robust and reliable descriptions of the individual geometry of the cerebral cortex. A mathematical description and representation of 3-D shape, capable of dealing with form of variable appearance, is at the focus of this paper. We base our development on the Medial Axis Transformation (MAT), customarily defined in 2-D, although the concept generalizes to any number of dimensions. Our implementation of the 3-D MAT combines a full 3-D Voronoi tessellation generated by the set of all border points with regularization procedures to obtain geometrically and topologically correct medial manifolds. The proposed algorithm was tested on synthetic objects and has been applied to 3-D MRI data of 1 mm isotropic resolution to obtain a description of the sulci in the cerebral cortex. Description and representation of the cortical anatomy is significant in clinical applications, medical research, and instrumentation developments.
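
    A compact two-dimensional analogue of the Voronoi-based construction described above: keeping only the Voronoi vertices of densely sampled border points that fall inside the object approximates its medial axis. The example uses an analytic ellipse so the inside test is trivial, and omits the regularization of spurious branches that the paper addresses.

      import numpy as np
      from scipy.spatial import Voronoi

      # Dense samples on the boundary of an ellipse (the "border points").
      t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
      a, b = 2.0, 1.0
      boundary = np.column_stack([a * np.cos(t), b * np.sin(t)])

      vor = Voronoi(boundary)

      # Medial-axis approximation: Voronoi vertices lying inside the object.
      v = vor.vertices
      inside = (v[:, 0] / a) ** 2 + (v[:, 1] / b) ** 2 < 1.0
      medial_axis_points = v[inside]
      print(medial_axis_points.shape)   # points clustered along the major axis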

  15. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, both of which are not available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch" where a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and the accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform. PMID:19328585
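
    One half of the hybrid scheme, the iterative landmark-to-ray step, can be sketched as minimizing the distances between rigidly transformed CT landmarks and the rays back-projected through their 2-D detections on the radiograph. The parameterization (rotation vector plus translation), the optimizer and all coordinates below are illustrative assumptions, not HipMatch internals.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.spatial.transform import Rotation

      def point_to_ray_distance(p, origin, direction):
          """Distance from point p to the ray {origin + s*direction, s >= 0}."""
          d = direction / np.linalg.norm(direction)
          v = p - origin
          s = max(v @ d, 0.0)
          return np.linalg.norm(v - s * d)

      def landmark_to_ray_cost(params, ct_landmarks, ray_origins, ray_dirs):
          """params = (rx, ry, rz, tx, ty, tz): rigid transform from CT to X-ray space."""
          R = Rotation.from_rotvec(params[:3]).as_matrix()
          t = params[3:]
          cost = 0.0
          for p, o, d in zip(ct_landmarks, ray_origins, ray_dirs):
              cost += point_to_ray_distance(R @ p + t, o, d) ** 2
          return cost

      # Illustrative data: three CT landmarks and the rays back-projected from
      # their 2-D detections on the radiograph (shared focal point at the origin).
      ct = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
      origins = np.zeros((3, 3))
      dirs = np.array([[0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [0.0, 0.0, 1.0]])
      res = minimize(landmark_to_ray_cost, x0=np.zeros(6), args=(ct, origins, dirs),
                     method="Powell")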

  16. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, which even claims completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, owing to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  17. Transfer of Learning between 2D and 3D Sources during Infancy: Informing Theory and Practice

    ERIC Educational Resources Information Center

    Barr, Rachel

    2010-01-01

    The ability to transfer learning across contexts is an adaptive skill that develops rapidly during early childhood. Learning from television is a specific instance of transfer of learning between a two-dimensional (2D) representation and a three-dimensional (3D) object. Understanding the conditions under which young children might accomplish this…

  18. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients.

  19. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  20. On the road to invariant object recognition: how cortical area V2 transforms absolute to relative disparity during 3D vision.

    PubMed

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2011-09-01

    Invariant recognition of objects depends on a hierarchy of cortical stages that build invariance gradually. Binocular disparity computations are a key part of this transformation. Cortical area V1 computes absolute disparity, which is the horizontal difference in retinal location of an image in the left and right foveas. Many cells in cortical area V2 compute relative disparity, which is the difference in absolute disparity of two visible features. Relative, but not absolute, disparity is invariant under both a disparity change across a scene and vergence eye movements. A neural network model is introduced which predicts that shunting lateral inhibition of disparity-sensitive layer 4 cells in V2 causes a peak shift in cell responses that transforms absolute disparity from V1 into relative disparity in V2. This inhibitory circuit has previously been implicated in contrast gain control, divisive normalization, selection of perceptual groupings, and attentional focusing. The model hereby links relative disparity to other visual functions and thereby suggests new ways to test its mechanistic basis. Other brain circuits are reviewed wherein lateral inhibition causes a peak shift that influences behavioral responses.

  1. Constructing topologically connected surfaces for the comprehensive analysis of 3-D medical structures

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Cutting, Court B.; Haddad, Betsy; Noz, Marilyn E.

    1991-06-01

Three-dimensional (3D) medical imaging deals with the visualization, manipulation, and measuring of objects in 3D medical images. So far, research efforts have concentrated primarily on visualization, using well-developed methods from computer graphics. Very little has been achieved in developing techniques for manipulating medical objects, or for extracting quantitative measurements from them beyond volume calculation (by counting voxels), and computing distances and angles between manually located surface points. A major reason for the slow pace in the development of manipulation and quantification methods lies with the limitations of current algorithms for constructing surfaces from 3D solid objects. We show that current surface construction algorithms either (a) do not construct valid surface descriptions of solid objects or (b) produce surface representations that are not particularly suitable for anything other than visualization. We present ALLIGATOR, a new surface construction algorithm that produces valid, topologically connected surface representations of solid objects. We have developed a modeling system based on the surface representations created by ALLIGATOR that is suitable for developing algorithms to visualize, manipulate, and quantify 3D medical objects. Using this modeling system we have developed a method for efficiently computing principal curvatures and directions on surfaces. These measurements form the basis for a new metric system being developed for morphometrics. The modeling system is also being used in the development of systems for quantitative pre-surgical planning and surgical augmentation.
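
    The principal-curvature computation mentioned above can be sketched by fitting a local quadratic height function to a vertex's neighbours in its tangent frame and taking the eigenvalues of the resulting shape operator. Normals and neighbourhoods are assumed to be given; the example checks the result on a sphere of radius 2, where both curvatures should have magnitude 1/2 (the sign follows the outward-normal convention used here).

      import numpy as np

      def principal_curvatures(offsets, normal):
          """Estimate principal curvatures at a vertex from neighbour offsets
          (neighbour - vertex, in 3-D) and the unit surface normal at the vertex."""
          # Build an orthonormal tangent frame (t1, t2, n).
          n = normal / np.linalg.norm(normal)
          t1 = np.cross(n, [1.0, 0.0, 0.0])
          if np.linalg.norm(t1) < 1e-8:
              t1 = np.cross(n, [0.0, 1.0, 0.0])
          t1 /= np.linalg.norm(t1)
          t2 = np.cross(n, t1)
          x, y, z = offsets @ t1, offsets @ t2, offsets @ n
          # Least-squares fit of the height function z = a x^2 + b xy + c y^2 + d x + e y.
          A = np.column_stack([x**2, x * y, y**2, x, y])
          a, b, c, d, e = np.linalg.lstsq(A, z, rcond=None)[0]
          # Shape operator from the first/second fundamental forms at the vertex.
          W = np.sqrt(1.0 + d**2 + e**2)
          I = np.array([[1 + d**2, d * e], [d * e, 1 + e**2]])
          II = np.array([[2 * a, b], [b, 2 * c]]) / W
          k1, k2 = np.linalg.eigvals(np.linalg.solve(I, II)).real
          return k1, k2

      # Check on a sphere of radius 2: both curvatures approx -0.5 with the outward normal.
      rng = np.random.default_rng(1)
      vertex = np.array([0.0, 0.0, 2.0])
      phi = rng.uniform(0, 2 * np.pi, 30)
      theta = rng.uniform(0.05, 0.3, 30)
      nbrs = 2.0 * np.column_stack([np.sin(theta) * np.cos(phi),
                                    np.sin(theta) * np.sin(phi),
                                    np.cos(theta)])
      print(principal_curvatures(nbrs - vertex, normal=np.array([0.0, 0.0, 1.0])))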

  2. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

This paper addresses the issue of 3D mesh indexation by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is proposed here. Intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation