Science.gov

Sample records for 3d face representations

  1. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
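
    A minimal sketch of the sparse representation-based classification (SRC) idea described above, with placeholder data and a single probe descriptor rather than the authors' multitask 3DMKDSRC formulation: the probe is coded over the gallery dictionary with an l1-regularized fit and assigned to the class with the smallest reconstruction residual.

      import numpy as np
      from sklearn.linear_model import Lasso

      def src_classify(D, labels, y, alpha=0.01):
          # D: (d, n) gallery dictionary, one l2-normalized descriptor per column
          # labels: (n,) subject label of each column; y: (d,) probe descriptor
          y = y / np.linalg.norm(y)
          coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
          coder.fit(D, y)                              # l1-regularized coding over the gallery
          x = coder.coef_
          residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                       for c in np.unique(labels)}
          return min(residuals, key=residuals.get)     # class with the smallest residual

      rng = np.random.default_rng(0)                   # toy gallery: 2 subjects, 32-D descriptors
      D = rng.standard_normal((32, 10))
      D /= np.linalg.norm(D, axis=0)
      labels = np.array([0] * 5 + [1] * 5)
      probe = D[:, 2] + 0.05 * rng.standard_normal(32)
      print(src_classify(D, labels, probe))            # expected: 0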

  2. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  3. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  4. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
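
    The ICP alignment step mentioned in this record can be sketched as follows; this is a generic point-to-point ICP with an SVD-based rigid fit on synthetic data, not the SNL3dFace code, and the scaling and deformation stages are omitted.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_align(src, ref, iters=30):
          # rigidly align src (N,3) to ref (M,3); returns the transformed src
          src = src.copy()
          tree = cKDTree(ref)
          for _ in range(iters):
              _, idx = tree.query(src)                 # closest-point correspondences
              matched = ref[idx]
              mu_s, mu_r = src.mean(axis=0), matched.mean(axis=0)
              H = (src - mu_s).T @ (matched - mu_r)    # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              R = Vt.T @ U.T                           # best rotation (Kabsch)
              if np.linalg.det(R) < 0:                 # guard against reflections
                  Vt[-1] *= -1
                  R = Vt.T @ U.T
              t = mu_r - R @ mu_s
              src = src @ R.T + t
          return src

      rng = np.random.default_rng(1)                   # toy check: undo a small rotation + shift
      ref = rng.standard_normal((500, 3))
      c, s = np.cos(0.2), np.sin(0.2)
      Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
      print(np.abs(icp_align(ref @ Rz.T + 0.1, ref) - ref).mean())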

  5. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  6. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  7. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing at the core of our system provides the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer.

  8. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing at the core of our system provides the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer. PMID:24808129

  9. Robust 3D face recognition by local shape difference boosting.

    PubMed

    Wang, Yueming; Liu, Jianzhuang; Tang, Xiaoou

    2010-10-01

    This paper proposes a new 3D face recognition approach, Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids registering an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as an intermediate representation for the shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent with the FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery with 1,000 faces only takes about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient. PMID:20724762
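
    A rough sketch of the general idea behind a signed shape difference map combined with boosting, using placeholder block-mean features and synthetic depth images; the CSDC feature definitions, alignment, and training protocol of the paper are not reproduced.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier

      def ssdm_features(depth_a, depth_b, block=8):
          # signed shape difference map of two aligned depth images, summarized per block
          diff = depth_a - depth_b
          h, w = diff.shape
          blocks = diff[:h - h % block, :w - w % block].reshape(
              h // block, block, w // block, block)
          return blocks.mean(axis=(1, 3)).ravel()

      rng = np.random.default_rng(2)                   # synthetic genuine/impostor pairs
      faces = rng.standard_normal((20, 64, 64))
      X, y = [], []
      for i in range(20):
          X.append(ssdm_features(faces[i], faces[i] + 0.05 * rng.standard_normal((64, 64))))
          y.append(1)                                   # same subject
          X.append(ssdm_features(faces[i], faces[(i + 1) % 20]))
          y.append(0)                                   # different subjects
      clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=100)
      clf.fit(np.array(X), np.array(y))                 # boosted stumps over local features
      print(clf.score(np.array(X), np.array(y)))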

  10. Random-profiles-based 3D face recognition system.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the proposed method achieves a reliable recognition rate against pose variation.

  11. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the proposed method achieves a reliable recognition rate against pose variation. PMID:24691101

  12. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addressed the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  13. Stable face representations

    PubMed Central

    Jenkins, Rob; Burton, A. Mike

    2011-01-01

    Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research. PMID:21536553
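
    The image-averaging proposal can be sketched in a few lines: the pixelwise mean of several roughly aligned photos of the same person dilutes image-specific variation. File names are hypothetical and the alignment and shape-normalization steps of the full method are omitted.

      import numpy as np
      from PIL import Image

      def average_face(paths, size=(128, 128)):
          # pixelwise mean of pre-aligned face photos (assumes images are already aligned)
          stack = [np.asarray(Image.open(p).convert("L").resize(size), dtype=float)
                   for p in paths]
          return np.mean(stack, axis=0)          # between-snapshot variation averages out

      # avg = average_face(["person_a_01.jpg", "person_a_02.jpg", "person_a_03.jpg"])
      # Image.fromarray(avg.astype(np.uint8)).save("person_a_average.png")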

  14. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  15. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  16. 3D-dynamic representation of DNA sequences.

    PubMed

    Wąż, Piotr; Bielińska-Wąż, Dorota

    2014-03-01

    A new 3D graphical representation of DNA sequences is introduced. This representation is called the 3D-dynamic representation. It is a generalization of the 2D-dynamic representation. The sequences are represented by sets of "material points" in 3D space. The resulting 3D-dynamic graphs are treated as rigid bodies. The descriptors characterizing the graphs are analogous to the ones used in classical dynamics. The classification diagrams derived from this representation are presented and discussed. Due to the third dimension, "the history of the graph" can be recognized graphically because the 3D-dynamic graph does not overlap with itself. Specific parts of the graphs correspond to specific parts of the sequence. This feature is essential for graphical comparisons of the sequences. Numerically, both the 2D and 3D approaches are of high quality. In particular, a difference in a single base between two sequences can be identified and correctly described (one can identify which base) by both the 2D and 3D methods. PMID:24567158
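
    An illustrative sketch of the kind of construction described: each base shifts a walk in the plane, a third coordinate keeps the graph from overlapping itself, and descriptors analogous to classical dynamics (centre of mass, principal moments of inertia) summarize the resulting set of material points. The base-to-direction assignment and descriptor details below are assumptions, not the paper's exact definitions.

      import numpy as np

      SHIFTS = {"A": np.array([1.0, 0.0]), "T": np.array([-1.0, 0.0]),
                "G": np.array([0.0, 1.0]), "C": np.array([0.0, -1.0])}   # assumed convention

      def dynamic_graph_3d(seq):
          # cumulative walk; z advances by 1 per base so the graph cannot overlap itself
          pts, pos = [], np.zeros(3)
          for base in seq.upper():
              pos = pos + np.array([*SHIFTS[base], 1.0])
              pts.append(pos.copy())
          return np.array(pts)                 # one "material point" per base

      def descriptors(points):
          # centre of mass and principal moments of inertia of unit-mass points
          com = points.mean(axis=0)
          r = points - com
          S = np.einsum("ni,nj->ij", r, r)
          inertia = np.trace(S) * np.eye(3) - S
          return com, np.linalg.eigvalsh(inertia)

      print(descriptors(dynamic_graph_3d("ATGCGTA")))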

  17. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, the emergence of 3D shape in face recognition has been driven by its robustness to pose and illumination changes. These benefits alone, however, do not guarantee a satisfactory recognition rate; other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose our 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the training we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we proceed to voxelization to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. For the recognition stage, a face of unknown identity is projected onto all the training wavelet networks to obtain a new feature vector after every projection. A similarity score is computed between the original and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the FRGC v.2 benchmark.
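
    The voxelization and 3D wavelet decomposition stage can be sketched as below, using PyWavelets' n-dimensional DWT as a stand-in for the 3D fast wavelet transform; the wavelet-network modelling and projection stages of the approach are not shown, and the point cloud is a placeholder.

      import numpy as np
      import pywt

      def voxelize(points, grid=32):
          # map (N, 3) face coordinates into a binary occupancy volume
          mins, maxs = points.min(axis=0), points.max(axis=0)
          idx = ((points - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
          vol = np.zeros((grid, grid, grid))
          vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
          return vol

      rng = np.random.default_rng(3)                      # placeholder (x, y, z) point cloud
      volume = voxelize(rng.standard_normal((5000, 3)))

      coeffs = pywt.dwtn(volume, "haar")                  # one level of a separable 3D DWT
      feature_vector = coeffs["aaa"].ravel()              # low-pass subband as a coarse summary
      print(feature_vector.shape)                         # (4096,) for a 32^3 volume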

  18. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and poses variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm based on conformal mapping of original facial surfaces onto a Riemannian manifold followed by comparison of conformal and isometric invariants computed in the manifold is suggested. Experimental results are presented using common 3D face databases that contain significant amount of expression and pose variations.

  19. Formal representation of 3D structural geological models

    NASA Astrophysics Data System (ADS)

    Wang, Zhangang; Qu, Honggang; Wu, Zixing; Yang, Hongjun; Du, Qunle

    2016-05-01

    The development and widespread application of geological modeling methods have increased demands for the integration and sharing services of three-dimensional (3D) geological data. However, theoretical research in the field of geological information sciences is limited despite the widespread use of Geographic Information Systems (GIS) in geology. In particular, fundamental research on the formal representations and standardized spatial descriptions of 3D structural models is required. This is necessary for accurate understanding and further applications of geological data in 3D space. In this paper, we propose a formal representation method for 3D structural models using the theory of point set topology, which produces a mathematical definition for the major types of geological objects. The spatial relationships between geologic boundaries, structures, and units are explained in detail using the 9-intersection model. Reasonable conditions for describing the topological space of 3D structural models are also provided. The results from this study can be used as potential support for the standardized representation and spatial quality evaluation of 3D structural models, as well as for specific needs related to model-based management, query, and analysis.

  20. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of feature and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.

  1. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.

  2. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of feature and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  3. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with a high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of feature and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  4. IR Fringe Projection for 3D Face Recognition

    NASA Astrophysics Data System (ADS)

    Spagnolo, Giuseppe Schirripa; Cozzella, Lorenzo; Simonetti, Carla

    2010-04-01

    Facial recognition can be used for the identification of individuals or for verification, e.g., for access control. The process requires that the facial data are captured and then compared with stored reference data. Unlike traditional methods, which use 2D images to recognize human faces, this article shows a known shape extraction methodology applied to the extraction of 3D human faces, combined with a nonconventional optical system able to work in an "invisible" way. The proposed method is experimentally simple and has a low-cost set-up.

  5. Developing Spatial Reasoning Through 3D Representations of the Universe

    NASA Astrophysics Data System (ADS)

    Summers, F.; Eisenhamer, B.; McCallister, D.

    2013-12-01

    Mental models of astronomical objects are often greatly hampered by the flat two-dimensional representation of pictures from telescopes. Lacking experience with the true structures in much of the imagery, there is no basis for anything but the default interpretation of a picture postcard. Using astronomical data and scientific visualizations, our professional development session allows teachers and their students to develop their spatial reasoning while forming more accurate and richer mental models. Examples employed in this session include star positions and constellations, morphologies of both normal and interacting galaxies, shapes of planetary nebulae, and three dimensional structures in star forming regions. Participants examine, imagine, predict, and confront the 3D interpretation of well-known 2D imagery using authentic data from NASA, the Hubble Space Telescope, and other scientific sources. The session's cross-disciplinary nature includes science, math, and artistic reasoning while addressing common cosmic misconceptions. Stars of the Orion Constellation seen in 3D explodes the popular misconception that stars in a constellation are all at the same distance. A scientific visualization of two galaxies colliding provides a 3D comparison for Hubble images of interacting galaxies.

  6. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies. PMID:20395086

  7. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. Consequently these are based on human perception which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result of this, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies.

  8. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is easier to acquire for family entertainment purposes today because of the mass production, low cost, and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike the simple frame-fusion super-resolution method, this approach aims to acquire compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured, and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied to future applications, such as access control systems using face recognition and smart phones with depth cameras, which need high resolution and little measurement time.
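
    A generic compressive-sensing sketch of the underlying recovery idea: random measurements of a sparse signal are inverted by orthogonal matching pursuit. The Kinect-specific sparse coding, sample fusion, and super-resolution stages of the paper are not reproduced.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(4)
      n, k, m = 256, 8, 64                     # signal length, sparsity, measurement count

      x = np.zeros(n)                          # sparse stand-in for a depth patch in some basis
      x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
      y = Phi @ x                                       # compressed measurements

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
      omp.fit(Phi, y)                                   # sparse recovery
      print(np.linalg.norm(x - omp.coef_) / np.linalg.norm(x))   # small relative error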

  9. Inverse rendering of faces with a 3D morphable model.

    PubMed

    Aldrian, Oswald; Smith, William A P

    2013-05-01

    In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database. PMID:23520253

  10. A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction.

    SciTech Connect

    Koudelka, Melissa L.; Koch, Mark William; Russ, Trina Denise

    2005-04-01

    Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real-world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to 'prescreen' face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
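
    The linear-time Hausdorff fraction on a range-image grid can be sketched with a distance transform: one pass gives the distance from every pixel to the nearest gallery pixel, and the fraction of probe points within a threshold follows directly. The threshold and data below are placeholders, not the paper's settings.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def hausdorff_fraction(probe_pts, gallery_mask, tau=3.0):
          # probe_pts: (N, 2) row/col indices; gallery_mask: True where the gallery range image has data
          dist_to_gallery = distance_transform_edt(~gallery_mask)   # distance to nearest gallery pixel
          d = dist_to_gallery[probe_pts[:, 0], probe_pts[:, 1]]
          return float(np.mean(d <= tau))       # fraction of probe points within tau pixels

      rng = np.random.default_rng(5)             # toy 100x100 range-image grid
      gallery = np.zeros((100, 100), dtype=bool)
      gallery[40:60, 40:60] = True
      probe = np.column_stack([rng.integers(38, 62, 50), rng.integers(38, 62, 50)])
      print(hausdorff_fraction(probe, gallery))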

  11. The Fermion Representation of Quantum Toroidal Algebra on 3D Young Diagrams

    NASA Astrophysics Data System (ADS)

    Cai, Li-Qiang; Wang, Li-Fang; Wu, Ke; Yang, Jie

    2014-07-01

    We develop an equivalence between the diagonal slices and the perpendicular slices of 3D Young diagrams via Maya diagrams. Furthermore, we construct the fermion representation of quantum toroidal algebra on the 3D Young diagrams perpendicularly sliced.

  12. Creating 3D realistic head: from two orthogonal photos to multiview face contents

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Lin, Qian; Tang, Feng; Tang, Liang; Lim, Sukhwan; Wang, Shengjin

    2011-03-01

    3D head models have many applications, such as virtual conferencing, 3D web games, and so on. The several existing web-based face modeling solutions, which can create a 3D face model from one or two user-uploaded face images, are limited to generating a 3D model of the face region only. The accuracy of such reconstruction is very limited for side views, as well as hair regions. The goal of our research is to develop a framework for reconstructing a realistic 3D human head based on two approximately orthogonal views. Our framework takes two images and goes through segmentation, feature point detection, 3D bald head reconstruction, 3D hair reconstruction, and texture mapping to create a 3D head model. The main contribution of the paper is that the processing steps are applied to both the face region and the hair region.

  13. Consistent representations of and conversions between 3D rotations

    NASA Astrophysics Data System (ADS)

    Rowenhorst, D.; Rollett, A. D.; Rohrer, G. S.; Groeber, M.; Jackson, M.; Konijnenberg, P. J.; De Graef, M.

    2015-12-01

    In materials science the orientation of a crystal lattice is described by means of a rotation relative to an external reference frame. A number of rotation representations are in use, including Euler angles, rotation matrices, unit quaternions, Rodrigues-Frank vectors and homochoric vectors. Each representation has distinct advantages and disadvantages with respect to the ease of use for calculations and data visualization. It is therefore convenient to be able to easily convert from one representation to another. However, historically, each representation has been implemented using a set of often tacit conventions; separate research groups would implement different sets of conventions, thereby making the comparison of methods and results difficult and confusing. This tutorial article aims to resolve these ambiguities and provide a consistent set of conventions and conversions between common rotational representations, complete with worked examples and a discussion of the trade-offs necessary to resolve all ambiguities. Additionally, an open source Fortran-90 library of conversion routines for the different representations is made available to the community.
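
    One of the conversion pairs the article standardizes, sketched in Python under the common convention of a unit quaternion (w, x, y, z) acting as an active rotation; this is an illustration of the mathematics, not the article's Fortran-90 library.

      import numpy as np

      def quat_to_matrix(q):
          # unit quaternion (w, x, y, z) -> 3x3 rotation matrix (active rotation)
          w, x, y, z = q / np.linalg.norm(q)
          return np.array([
              [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
              [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
              [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

      def matrix_to_quat(R):
          # rotation matrix -> quaternion; assumes the trace(R) > -1 branch for brevity
          w = 0.5 * np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2]))
          return np.array([w,
                           (R[2, 1] - R[1, 2]) / (4 * w),
                           (R[0, 2] - R[2, 0]) / (4 * w),
                           (R[1, 0] - R[0, 1]) / (4 * w)])

      q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])   # 90 deg about z
      print(np.allclose(matrix_to_quat(quat_to_matrix(q)), q))          # round trip: True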

  14. Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron

    2013-01-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of

  15. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of

  16. Scalable illumination robust face identification using harmonic representation

    NASA Astrophysics Data System (ADS)

    Xia, Cong; Chen, Jiansheng; Yang, Chang; Wang, Jing; Liu, Jing; Su, Guangda; Zhang, Gang

    2013-07-01

    Evaluations of both academic face recognition algorithms and commercial systems have shown that recognition performance degrades significantly due to variations in illumination. Previous methods for illumination-robust face recognition usually involve computationally expensive 3D model transformations or optimization-based reconstruction using multiple gallery face images, making them infeasible in practical large-scale face identification applications. In this paper, we propose an alternative face identification framework, in which one image per person is used for enrollment, as is commonly practiced in real-life applications. Several probe images captured under different illumination conditions are synthesized to imitate the illumination condition of the enrolled gallery face image. We assume Lambertian reflectance of human faces and use the harmonic representations of lighting. We demonstrate satisfactory performance on the Yale B database, both visually and quantitatively. The proposed method is of very low complexity when linear facial features are used, and is therefore scalable for large-scale applications.
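
    The harmonic-representation idea can be sketched as follows: under the Lambertian assumption, the first nine real spherical-harmonic basis functions evaluated at the surface normals span a low-dimensional lighting subspace, and a least-squares fit recovers the lighting coefficients of an image. Normals and intensities below are synthetic; the relighting and identification pipeline of the paper is not shown.

      import numpy as np

      def sh_basis(normals):
          # first 9 real spherical-harmonic basis functions at unit normals (N, 3) -> (N, 9)
          nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
          return np.stack([0.2821 * np.ones_like(nx),
                           0.4886 * ny, 0.4886 * nz, 0.4886 * nx,
                           1.0925 * nx * ny, 1.0925 * ny * nz,
                           0.3154 * (3 * nz**2 - 1),
                           1.0925 * nx * nz, 0.5462 * (nx**2 - ny**2)], axis=1)

      def estimate_lighting(intensities, normals, albedo=1.0):
          # least-squares 9-D lighting coefficients under the Lambertian assumption
          coeffs, *_ = np.linalg.lstsq(albedo * sh_basis(normals), intensities, rcond=None)
          return coeffs

      rng = np.random.default_rng(6)                       # synthetic normals and lighting
      n = rng.standard_normal((1000, 3))
      n /= np.linalg.norm(n, axis=1, keepdims=True)
      true_light = np.array([0.8, 0.1, 0.5, 0.2, 0, 0, 0.1, 0, 0])
      img = sh_basis(n) @ true_light
      print(np.round(estimate_lighting(img, n), 2))        # recovers true_light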

  17. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient, and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.

  18. Challenges Facing 3-D Audio Display Design for Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    The challenges facing successful multimedia presentation depend largely on the expectations of the designer and end user for a given application. Perceptual limitations in distance, elevation, and azimuth sound source simulation differ significantly between headphone and cross-talk cancellation loudspeaker listening and therefore must be considered. Simulation of an environmental context is desirable, but the quality depends on processing resources and the lack of interaction with the host acoustical environment. While techniques such as data reduction of head-related transfer functions have been used widely to improve simulation fidelity, another approach involves determining thresholds for environmental acoustic events. Psychoacoustic studies relevant to this approach are reviewed in consideration of multimedia applications.

  19. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations, that is, representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition. PMID:27271993

  20. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  1. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: Gaussian noise (and other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
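
    The two ingredients of the abstract can be illustrated in their simplest linear form: virtual training samples generated by adding Gaussian noise, and collaborative representation classification, i.e. a ridge-regularized coding of the probe over all training samples followed by class-residual assignment. The kernel extension and the paper's objective function are not reproduced, and the data are placeholders.

      import numpy as np

      def crc_classify(D, labels, y, lam=0.01):
          # collaborative representation: x = (D^T D + lam I)^-1 D^T y, then class residuals
          x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
          residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                       for c in np.unique(labels)}
          return min(residuals, key=residuals.get)

      def add_virtual_samples(D, labels, copies=2, sigma=0.05, seed=0):
          # augment each training column with noisy virtual samples (Gaussian noise assumption)
          rng = np.random.default_rng(seed)
          virt = [D + sigma * rng.standard_normal(D.shape) for _ in range(copies)]
          return np.hstack([D] + virt), np.tile(labels, copies + 1)

      rng = np.random.default_rng(7)             # toy gallery: 3 subjects, one 50-D sample each
      D = rng.standard_normal((50, 3))
      labels = np.array([0, 1, 2])
      D_aug, labels_aug = add_virtual_samples(D, labels)
      probe = D[:, 1] + 0.1 * rng.standard_normal(50)
      print(crc_classify(D_aug, labels_aug, probe))        # expected: 1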

  2. Geoinformation techniques for the 3D visualisation of historic buildings and representation of a building's pathology

    NASA Astrophysics Data System (ADS)

    Tsilimantou, Elisavet; Delegou, Ekaterini; Ioannidis, Charalabos; Moropoulou, Antonia

    2016-08-01

    In this paper, the documentation of a historic building registered as a Cultural Heritage asset is presented. The aim of the survey is to create a 3D geometric representation of the historic building and, in accordance with a multidisciplinary study, to extract useful information regarding the extent of degradation, the construction's durability, etc. For the implementation of the survey, a combination of different types of acquisition technologies is used. The project focuses on the study of Villa Klonaridi, in Athens, Greece. For the complete documentation of the building, conventional topographic, photogrammetric, and laser scanning techniques are combined. Close-range photogrammetric techniques are used for the acquisition of the façades and architectural details. One of the main objectives is the development of an accurate 3D model, in which the photorealistic representation of the building is achieved, along with the decay pathology, historical phases, and architectural components. In order to achieve a suitable graphical representation for the study of the material and decay patterns beyond the 2D representation, 3D modelling and additional information modelling are performed for comparative analysis. The study provides various conclusions regarding the scale of deterioration obtained by the 2D and 3D analyses, respectively. Considering the variation in material and decay patterns, comparative results are obtained regarding the degradation of the building. Overall, the paper describes a process performed on a historic building, in which the 3D digital acquisition of the monument's structure is realized by combining close-range surveying and laser scanning methods.

  3. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus in both academia and industry in recent years because of its wide range of potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation of the subject. However, 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results demonstrated the feasibility of this framework.

  4. Incremental learning of 3D-DCT compact representations for robust visual tracking.

    PubMed

    Li, Xi; Dick, Anthony; Shen, Chunhua; van den Hengel, Anton; Wang, Hanzi

    2013-04-01

    Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.
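
    The separability that the incremental scheme exploits is easy to verify numerically: a full 3D-DCT equals a 2D-DCT applied to each frame followed by a 1D-DCT along the third (temporal) axis. The sketch below checks this with SciPy and shows the coefficient truncation that yields the compact representation; the array sizes and the kept-coefficient window are arbitrary illustrative choices, not the authors' settings.

        import numpy as np
        from scipy.fft import dctn, dct

        rng = np.random.default_rng(0)
        cube = rng.random((32, 32, 10))   # e.g. 10 appearance samples of size 32x32

        # Full 3D-DCT in one call.
        full_3d = dctn(cube, norm='ortho')

        # Separable computation: 2D-DCT per frame, then 1D-DCT along the third axis.
        per_frame = dctn(cube, axes=(0, 1), norm='ortho')
        separable = dct(per_frame, axis=2, norm='ortho')

        assert np.allclose(full_3d, separable)

        # Compact representation: keep low-frequency coefficients, discard the rest.
        kept = np.zeros_like(full_3d)
        kept[:8, :8, :4] = full_3d[:8, :8, :4]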

  5. A Novel Multi-Purpose Matching Representation of Local 3D Surfaces: A Rotationally Invariant, Efficient, and Highly Discriminative Approach With an Adjustable Sensitivity.

    PubMed

    Al-Osaimi, Faisal R

    2016-02-01

    In this paper, a novel approach to local 3D surface matching representation suitable for a range of 3D vision applications is introduced. Local 3D surface patches around key points on the 3D surface are represented by 2D images such that the representing 2D images enjoy certain characteristics which positively impact the matching accuracy, robustness, and speed. First, the proposed representation is complete, in the sense that there is no information loss during its computation. Second, the 3DoF 2D representations are strictly invariant to all the 3DoF rotations. To make optimal use of surface information, the sensitivity of the representations to surface information is adjustable. This also provides the proposed matching representation with the means to optimally adjust to a particular class of problems/applications or an acquisition technology. Each 2D matching representation is a sequence of adjustable integral kernels, where each kernel is efficiently computed from a triple of precise 3D curves (profiles) formed by intersecting three concentric spheres with the 3D surface. Robust techniques for sampling the profiles and establishing correspondences among them were devised. Based on the proposed matching representation, two techniques for the detection of key points were presented. The first is suitable for static images, while the second is suitable for 3D videos. The approach was tested on the Face Recognition Grand Challenge v2.0, the 3D Twins Expression Challenge, and the Bosphorus data sets, and a superior face recognition performance was achieved. In addition, the proposed approach was used in object class recognition and tested on a Kinect data set. PMID:26513787

  6. A Novel Multi-Purpose Matching Representation of Local 3D Surfaces: A Rotationally Invariant, Efficient, and Highly Discriminative Approach With an Adjustable Sensitivity.

    PubMed

    Al-Osaimi, Faisal R

    2016-02-01

    In this paper, a novel approach to local 3D surface matching representation suitable for a range of 3D vision applications is introduced. Local 3D surface patches around key points on the 3D surface are represented by 2D images such that the representing 2D images enjoy certain characteristics which positively impact the matching accuracy, robustness, and speed. First, the proposed representation is complete, in the sense that there is no information loss during its computation. Second, the 3DoF 2D representations are strictly invariant to all the 3DoF rotations. To make optimal use of surface information, the sensitivity of the representations to surface information is adjustable. This also provides the proposed matching representation with the means to optimally adjust to a particular class of problems/applications or an acquisition technology. Each 2D matching representation is a sequence of adjustable integral kernels, where each kernel is efficiently computed from a triple of precise 3D curves (profiles) formed by intersecting three concentric spheres with the 3D surface. Robust techniques for sampling the profiles and establishing correspondences among them were devised. Based on the proposed matching representation, two techniques for the detection of key points were presented. The first is suitable for static images, while the second is suitable for 3D videos. The approach was tested on the Face Recognition Grand Challenge v2.0, the 3D Twins Expression Challenge, and the Bosphorus data sets, and a superior face recognition performance was achieved. In addition, the proposed approach was used in object class recognition and tested on a Kinect data set.

  7. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    NASA Astrophysics Data System (ADS)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections

  8. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
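
    A hedged sketch of the range-image idea follows: because probe and template are assumed pre-aligned, each valid probe pixel only needs to be compared against a small window of the template range image, which keeps the cost linear in the number of pixels. The window size, depth-only distance, and NaN convention for missing data are illustrative assumptions, not the exact metric used in the paper.

        import numpy as np

        def directed_range_hausdorff(probe, template, window=3):
            """Directed Hausdorff-like distance between two aligned range images.

            For each valid probe pixel, the nearest template depth within a small
            (2*window+1)^2 neighbourhood is found, so the cost stays O(N) in the
            number of pixels instead of O(N^2) over unorganised point sets.
            """
            h, w = probe.shape
            worst = 0.0
            for i in range(h):
                for j in range(w):
                    if np.isnan(probe[i, j]):
                        continue
                    i0, i1 = max(0, i - window), min(h, i + window + 1)
                    j0, j1 = max(0, j - window), min(w, j + window + 1)
                    patch = template[i0:i1, j0:j1]
                    valid = ~np.isnan(patch)
                    if not valid.any():
                        continue
                    d = np.abs(patch[valid] - probe[i, j]).min()
                    worst = max(worst, d)
            return worst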

  9. Robust face recognition via sparse representation.

    PubMed

    Wright, John; Yang, Allen Y; Ganesh, Arvind; Sastry, S Shankar; Ma, Yi

    2009-02-01

    We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
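
    A minimal sketch of sparse representation-based classification (SRC) is shown below, using scikit-learn's Lasso as a stand-in ℓ1 solver: the probe is coded over the gallery, only one class's coefficients are kept at a time, and the class with the smallest reconstruction residual wins. The Lasso penalty, the alpha value, and the gallery layout are illustrative assumptions rather than the authors' exact optimization.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(gallery, labels, probe, alpha=0.01):
            """Sparse representation-based classification (SRC) sketch.

            gallery : (n_features, n_samples) matrix whose columns are training faces
            labels  : length-n_samples array of subject identities
            probe   : length-n_features test face vector
            """
            labels = np.asarray(labels)
            # Solve a Lasso form of the l1-minimisation: gallery @ x ~= probe.
            solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
            solver.fit(gallery, probe)
            x = solver.coef_

            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)        # keep only class-c coefficients
                residuals[c] = np.linalg.norm(probe - gallery @ xc)
            return min(residuals, key=residuals.get)      # class with the smallest residual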

  10. Independent component representations for face recognition

    NASA Astrophysics Data System (ADS)

    Stewart Bartlett, Marian; Lades, Martin H.; Sejnowski, Terrence J.

    1998-07-01

    In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. A number of face recognition algorithms employ principal component analysis (PCA), which is based on the second-order statistics of the image set, and does not address high-order statistical dependencies such as the relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA which separates the high-order moments of the input in addition to the second-order moments. ICA was performed on a set of face images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons. The algorithm maximizes the mutual information between the input and the output, which produces statistically independent outputs under certain conditions. ICA was performed on the face images under two different architectures. The first architecture provided a statistically independent basis set for the face images that can be viewed as a set of independent facial features. The second architecture provided a factorial code, in which the probability of any combination of features can be obtained from the product of their individual probabilities. Both ICA representations were superior to representations based on principal components analysis for recognizing faces across sessions and changes in expression.
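
    The first architecture is straightforward to prototype. The hedged sketch below uses scikit-learn's FastICA as an illustrative stand-in for the infomax-style algorithm described above, treating each vectorised face image as an observation so that the mixing matrix yields independent "feature" images. The file name, component count, and preprocessing are assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        # faces: (n_images, n_pixels) matrix of vectorised, roughly aligned face images.
        # The file name is hypothetical.
        faces = np.load('faces.npy').astype(np.float64)
        faces -= faces.mean(axis=0)               # remove the mean face

        ica = FastICA(n_components=50, max_iter=1000, random_state=0)
        loadings = ica.fit_transform(faces)       # per-image coefficients on the components
        basis_images = ica.mixing_.T              # each row reshapes into an independent basis face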

  11. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of its robustness, flexibility, and accuracy. PMID:25296404
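
    The first-stage alignment relies on an iterative-closest-points style registration. Below is a minimal rigid-ICP sketch (nearest neighbours via a k-d tree, best-fit rotation via SVD) rather than the authors' full pipeline; the fixed iteration count and lack of convergence checks are simplifying assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_align(source, target, iters=20):
            """Rigid ICP: align a 3D source point cloud (n, 3) to a target cloud (m, 3)."""
            src = source.copy()
            tree = cKDTree(target)
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(iters):
                _, idx = tree.query(src)                 # closest target point per source point
                matched = target[idx]
                mu_s, mu_t = src.mean(0), matched.mean(0)
                H = (src - mu_s).T @ (matched - mu_t)    # 3x3 cross-covariance
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T
                if np.linalg.det(R) < 0:                 # avoid reflections
                    Vt[-1] *= -1
                    R = Vt.T @ U.T
                t = mu_t - R @ mu_s
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total, src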

  12. Profile of students' comprehension of 3D molecule representation and its interconversion on chirality

    NASA Astrophysics Data System (ADS)

    Setyarini, M.; Liliasari, Kadarohman, Asep; Martoprawiro, Muhamad A.

    2016-02-01

    This study aims at describing (1) students' level of comprehension and (2) factors causing difficulties in comprehending 3D molecule representations and their interconversion in the context of chirality. Data were collected using a multiple-choice test consisting of eight questions. The participants were required to give answers along with their reasoning. The test was developed based on indicators of concept comprehension. The study was conducted with 161 college students enrolled in a stereochemistry course in the odd semester (2014/2015) from two LPTK (teacher training institutes) in Bandar Lampung and Gorontalo, and one public university in Bandung. The results indicate that the college students' level of comprehension of 3D molecule representations and their interconversion was 5% at the high level, 22% at the moderate level, and 73% at the low level. The dominant factors identified as causes of difficulty in comprehending 3D molecule representations and their interconversion were (i) a lack of spatial awareness, (ii) violation of the rules for determining absolute configuration, (iii) imprecise placement of observers, (iv) a lack of rotation operations, and (v) a lack of understanding of the correlation between the representations. This study recommends that instruction include more rigorous spatial-awareness training tasks accompanied by dynamic visualization media of the molecules involved; learning with static molecular models can also help students overcome the difficulties encountered.

  13. The impact of specular highlights on 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Christlein, Vincent; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis

    2013-05-01

    One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face detection system with the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate of 10^-3. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10^-3.

  14. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.

  15. High-speed 3D face measurement based on color speckle projection

    NASA Astrophysics Data System (ADS)

    Xue, Junpeng; Su, Xianyu; Zhang, Qican

    2015-03-01

    Nowadays, 3D face recognition has become a subject of considerable interest in the security field, both domestically and internationally, due to its unique advantages. However, acquiring color-textured 3D face data in a fast and accurate manner is still highly challenging. In this paper, a new approach based on color speckle projection for dynamic acquisition of 3D face data is proposed. Firstly, the projector-camera color crosstalk matrix, which indicates how much each projector channel influences each camera channel, is measured. Secondly, the reference speckle-set images are acquired with a CCD, and three gray sets are then separated from the color sets using the crosstalk matrix and saved. Finally, the color speckle image modulated by the face is captured and split into three gray channels. We measure the 3D face using multi-set speckle correlation on the color speckle image at high speed, similar to one-shot acquisition, which greatly improves the measurement accuracy and stability. The suggested approach has been implemented and the results are supported by experiments.
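
    The channel-separation step can be pictured as inverting the measured 3x3 crosstalk matrix at every pixel. The sketch below is a hedged illustration with made-up matrix values; in practice the entries come from the paper's calibration measurement.

        import numpy as np

        # C[i, j]: how strongly projector channel j leaks into camera channel i
        # (values here are purely illustrative, not measured).
        C = np.array([[0.92, 0.10, 0.03],
                      [0.08, 0.85, 0.12],
                      [0.02, 0.09, 0.90]])

        def unmix_speckle(image_rgb, crosstalk):
            """Recover the three projected gray speckle patterns from one color frame.

            image_rgb : (h, w, 3) captured frame; returns an (h, w, 3) array whose
            channels approximate the three independent speckle sets.
            """
            h, w, _ = image_rgb.shape
            observed = image_rgb.reshape(-1, 3).T              # 3 x (h*w) pixel matrix
            separated = np.linalg.solve(crosstalk, observed)   # invert the channel mixing
            return separated.T.reshape(h, w, 3)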

  16. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.
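
    A 2D analogue of the sparse-representation processing is easy to sketch with scikit-learn: learn a patch dictionary, code each noisy patch with orthogonal matching pursuit, and reconstruct the slice. The paper's 3D SR additionally exploits structure across slices and phases, which this single-slice sketch omits; patch size, atom count, and sparsity level are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

        def sparse_denoise_2d(noisy, patch_size=(8, 8), n_atoms=128, n_nonzero=4):
            """Denoise one CT slice with patch-based sparse representation (2D analogue)."""
            patches = extract_patches_2d(noisy, patch_size)
            shape = patches.shape
            patches = patches.reshape(shape[0], -1).astype(np.float64)
            means = patches.mean(axis=1, keepdims=True)
            patches -= means                                        # learn on zero-mean patches

            dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm='omp',
                                               transform_n_nonzero_coefs=n_nonzero,
                                               random_state=0)
            codes = dico.fit(patches).transform(patches)            # sparse code per patch
            denoised = (codes @ dico.components_) + means           # reconstruct each patch
            return reconstruct_from_patches_2d(denoised.reshape(shape), noisy.shape)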

  17. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  18. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  19. Learning deformation model for expression-robust 3D face recognition

    NASA Astrophysics Data System (ADS)

    Guo, Zhe; Liu, Shu; Wang, Yi; Lei, Tao

    2015-12-01

    Expression change is the major cause of local plastic deformation of the facial surface. Under large expression changes, the intra-class differences can exceed the inter-class differences, making it difficult to recognize the same individual across facial expression changes. In this paper, an expression-robust 3D face recognition method is proposed by learning an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the principal components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and the neutral face in the training set is used to reconstruct the expression change with the constructed deformation model. The reconstruction residual error is used for face recognition. The average recognition rate on GavabDB and a self-built database reaches 85.1% and 83%, respectively, which shows strong robustness to expression changes.
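
    The deformation-model idea can be prototyped as PCA on training shape differences (expressive minus neutral), with the residual left after projecting a test-vs-neutral difference onto that subspace serving as the match score. The sketch below assumes densely corresponded vertex arrays and illustrative parameter values; it is not the authors' exact pipeline.

        import numpy as np
        from sklearn.decomposition import PCA

        def fit_deformation_model(expressive_shapes, neutral_shapes, n_components=20):
            """Learn an expression deformation subspace from training shape differences.

            Shapes are (n_samples, n_points*3) arrays of corresponded 3D vertices.
            """
            deformations = expressive_shapes - neutral_shapes
            return PCA(n_components=n_components).fit(deformations)

        def recognition_residual(model, test_shape, neutral_shape):
            """Residual after explaining the test-vs-neutral difference with the model.

            A small residual means the difference looks like a plausible expression
            change of this gallery subject; the gallery face with the smallest
            residual would be reported as the identity.
            """
            diff = (test_shape - neutral_shape).reshape(1, -1)
            reconstructed = model.inverse_transform(model.transform(diff))
            return float(np.linalg.norm(diff - reconstructed))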

  20. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.

  1. 3D representations of amino acids—applications to protein sequence comparison and classification

    PubMed Central

    Li, Jie; Koehl, Patrice

    2014-01-01

    The amino acid sequence of a protein is the key to understanding its structure and ultimately its function in the cell. This paper addresses the fundamental issue of encoding amino acids in ways that the representation of such a protein sequence facilitates the decoding of its information content. We show that a feature-based representation in a three-dimensional (3D) space derived from amino acid substitution matrices provides an adequate representation that can be used for direct comparison of protein sequences based on geometry. We measure the performance of such a representation in the context of the protein structural fold prediction problem. We compare the results of classifying different sets of proteins belonging to distinct structural folds against classifications of the same proteins obtained from sequence alone or directly from structural information. We find that sequence alone performs poorly as a structure classifier. We show in contrast that the use of the three dimensional representation of the sequences significantly improves the classification accuracy. We conclude with a discussion of the current limitations of such a representation and with a description of potential improvements. PMID:25379143

  2. A novel sensor system for 3D face scanning based on infrared coded light

    NASA Astrophysics Data System (ADS)

    Modrow, Daniel; Laloni, Claudio; Doemens, Guenter; Rigoll, Gerhard

    2008-02-01

    In this paper we present a novel sensor system for three-dimensional face scanning applications. Its operating principle is based on active triangulation with a color-coded light approach. As it is implemented in the near-infrared band, the light used is invisible to human perception. Though the proposed sensor is primarily designed for face scanning and biometric applications, its performance characteristics are beneficial for technical applications as well. The acquisition of 3D data is real-time capable, provides accurate, high-resolution depth maps, and shows high robustness against ambient light. Hence most of the limiting factors of other sensors for 3D and face scanning applications are eliminated, such as blinding and annoying light patterns, motion constraints, and highly restricted scenarios due to ambient light constraints.

  3. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  4. Error analysis for creating 3D face templates based on cylindrical quad-tree structure

    NASA Astrophysics Data System (ADS)

    Gutfeter, Weronika

    2015-09-01

    The development of new biometric algorithms parallels advances in the technology of sensing devices. Some of the limitations of current face recognition systems may be eliminated by integrating 3D sensors into these systems. Depth sensing devices can capture the spatial structure of the face in addition to its texture and color. However, this kind of data is usually very voluminous and requires a large amount of computing resources to process (face scans obtained with typical depth cameras contain more than 150,000 points per face). That is why defining efficient data structures for processing spatial images is crucial for the further development of 3D face recognition methods. The concept described in this work fulfills the aforementioned demands. A modification of the quad-tree structure was chosen because it can easily be transformed into lower-dimensional data structures and maintains spatial relations between data points. We are able to interpret the data stored in the tree as a pyramid of features, which allows us to analyze face images using a coarse-to-fine strategy often exploited in biometric recognition systems.

  5. 3D face recognition using simulated annealing and the surface interpenetration measure.

    PubMed

    Queirolo, Chauã C; Silva, Luciano; Bellon, Olga R P; Segundo, Maurício Pamplona

    2010-02-01

    This paper presents a novel automatic framework to perform 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM), as similarity measure, in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, forehead, and the entire face region. Then, a modified SA approach is proposed taking advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems and the results compared to those reported by state-of-the-art works. By using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score ever achieved for the FRGC v2 database when compared to results published in the literature. PMID:20075453

  6. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    PubMed

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through the data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality, and have large variations in viewpoint and face expression.

  7. Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution

    NASA Astrophysics Data System (ADS)

    Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike

    2011-04-01

    Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks as firstly the reduction of the dose distribution to a histogram results in the loss of spatial information and secondly the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISCTRN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.

  8. An orthognathic simulation system integrating teeth, jaw and face data using 3D cephalometry.

    PubMed

    Noguchi, N; Tsuji, M; Shigematsu, M; Goto, M

    2007-07-01

    A method for simulating the movement of teeth, jaw and face caused by orthognathic surgery is proposed, characterized by the use of 3D cephalometric data for 3D simulation. Computed tomography data are not required. The teeth and facial data are obtained by a laser scanner and the data for the patient's mandible are reconstructed and integrated according to 3D cephalometry using a projection-matching technique. The mandibular form is simulated by transforming a generic model to match the patient's cephalometric data. This system permits analysis of bone movement at each individual part, while also helping in the choice of optimal osteotomy design considering the influences on facial soft-tissue form.

  9. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine
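
    As a minimal illustration of the kind of conversion such a hierarchy performs, the sketch below maps a sensed 3D location between a head-centred and a body-centred frame with a homogeneous transform. The specific frames, offsets, and rotation are made-up assumptions, not the system's actual representation.

        import numpy as np

        def make_transform(rotation, translation):
            """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
            T = np.eye(4)
            T[:3, :3] = rotation
            T[:3, 3] = translation
            return T

        def to_frame(point, T):
            """Map a 3D point through a homogeneous transform."""
            return (T @ np.append(point, 1.0))[:3]

        # Example: the head frame sits 0.3 m above the body origin, turned 30 deg.
        angle = np.deg2rad(30)
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0,              0,             1]])
        body_T_head = make_transform(Rz, [0.0, 0.0, 0.3])

        sound_in_head = np.array([1.0, 0.0, 0.0])              # auditory source, head-centred
        sound_in_body = to_frame(sound_in_head, body_T_head)   # same source, body-centred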

  10. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian-distributed templates.
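
    As a generic illustration of the binarization step (not the paper's specific scheme), real-valued templates can be thresholded per dimension at the population median estimated from the enrollment set, which pushes each bit toward a uniform 0/1 distribution; protected templates are then compared with a Hamming distance. All names and the thresholding rule below are assumptions.

        import numpy as np

        def learn_thresholds(enrollment_templates):
            """Per-dimension thresholds estimated from the enrollment database.

            Using the median of each feature makes every bit 0/1 with probability
            close to 0.5, encouraging uniformly distributed binary vectors.
            """
            return np.median(enrollment_templates, axis=0)

        def binarize(template, thresholds):
            """Convert a real-valued biometric template into a binary vector."""
            return (template > thresholds).astype(np.uint8)

        def hamming_distance(a, b):
            """Fractional Hamming distance used to compare protected templates."""
            return float(np.mean(a != b))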

  11. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When a 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the possible problems in texture extraction from a single image caused by self-occlusion. Unlike common approaches that leverage the symmetric property of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the colors of neighboring vertices. Although this step creates unrealistic, overly smoothed texture, the illumination stays constant between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme ensures gentle handling of illumination and yet yields realistic texture. Because the blending area only covers non-informative regions, the main facial features still have a unique appearance in the two face halves. Evaluation results reveal realistic rendering in novel poses, robust to challenging illumination conditions and small registration errors.

  12. Comparing the Roles of Representations in Face-to-Face and Online Computer Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Suthers, Daniel D.; Hundhausen, Christopher D.; Girardeau, Laura E.

    2003-01-01

    This paper reports an empirical study comparing the role of discourse and knowledge representations (graphical evidence mapping) in face-to-face versus synchronous online collaborative learning. Prior work in face-to-face collaborative learning situations has shown that the features of representational notations can influence the focus of…

  13. Separating the Representation from the Science: Training Students in Comprehending 3D Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Silver, D.; Chiang, J.; Halpern, D.; Oh, K.; Tremaine, M.

    2011-12-01

    Studies of students taking first year geology and earth science courses at universities find that a remarkable number of them are confused by the three-dimensional representations used to explain the science [1]. Comprehension of these 3D representations has been found to be related to an individual's spatial ability [2]. A variety of interactive programs and animations have been created to help explain the diagrams to beginning students [3, 4]. This work has demonstrated comprehension improvement and removed a gender gap between male (high spatial) and female (low spatial) students [5]. However, not much research has examined what makes the 3D diagrams so hard to understand or attempted to build a theory for creating training designed to remove these difficulties. Our work has separated the science labeling and comprehension of the diagrams from the visualizations to examine how individuals mentally see the visualizations alone. In particular, we asked subjects to create a cross-sectional drawing of the internal structure of various 3D diagrams. We found that viewing planes (the coordinate system the designer applies to the diagram), cutting planes (the planes formed by the requested cross sections) and visual property planes (the planes formed by the prominent features of the diagram, e.g., a layer at an angle of 30 degrees to the top surface of the diagram) that deviated from a Cartesian coordinate system imposed by the viewer caused significant problems for subjects, in part because these deviations forced them to mentally re-orient their viewing perspective. Problems with deviations in all three types of plane were significantly harder than those deviating on one or two planes. Our results suggest training that does not focus on showing how the components of various 3D geologic formations are put together but rather training that guides students in re-orienting themselves to deviations that differ from their right-angle view of the world, e.g., by showing how

  14. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. In the proposed method, negligible visual distortion is observed in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the capability of recovering the watermark information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It is also robust against geometric transformation and vertex/face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.

  15. Two Eyes, 3D Early Results: Stereoscopic vs 2D Representations of Highly Spatial Scientific Imagery

    NASA Astrophysics Data System (ADS)

    Price, Aaron

    2013-06-01

    "Two Eyes, 3D" is a 3-year NSF funded research project to study the educational impacts of using stereoscopic representations in informal settings. The first study conducted as part of the project tested children 5-12 on their ability to perceive spatial elements of slides of scientific objects shown to them in either stereoscopic or 2D format. Children were also tested for prior spatial ability. Early results suggest that stereoscopy does not have a major impact on perceiving spatial elements of an image, but it does have a more significant impact on how the children apply that knowledge when presented with a common sense situation. The project is run by the AAVSO and this study was conducted at the Boston Museum of Science.

  16. Compact encoding of 3-D voxel surfaces based on pattern code representation.

    PubMed

    Kim, Chang-Su; Lee, Sang-Uk

    2002-01-01

    In this paper, we propose a lossless compression algorithm for three-dimensional (3-D) binary voxel surfaces, based on the pattern code representation (PCR). In PCR, a voxel surface is represented by a series of pattern codes. The pattern of a voxel v is defined as the 3 x 3 x 3 array of voxels centered on v. Therefore, the pattern code for v informs of the local shape of the voxel surface around v. The proposed algorithm can achieve a coding gain, since the patterns of adjacent voxels are highly correlated with each other. The performance of the proposed algorithm is evaluated using various voxel surfaces, which are scan-converted from triangular mesh models. It is shown that the proposed algorithm requires only approximately 0.5-1 bits per black voxel (bpbv) to store or transmit the voxel surfaces.
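
    A hedged sketch of extracting such pattern codes from a binary volume is given below: each occupied voxel's 3 x 3 x 3 neighbourhood is flattened into a 27-bit integer. The bit ordering, zero-padding at the borders, and data layout are assumptions for illustration; the paper's entropy coding of the code series is not shown.

        import numpy as np

        def pattern_codes(volume):
            """Return {voxel index: 27-bit pattern code} for a binary 3D volume.

            The pattern of voxel v is its 3x3x3 neighbourhood flattened to 27 bits;
            the volume is zero-padded so border voxels also receive a code.
            """
            padded = np.pad(volume.astype(np.uint8), 1)
            weights = 1 << np.arange(27, dtype=np.int64)       # bit weight per neighbour
            codes = {}
            for x, y, z in zip(*np.nonzero(volume)):
                block = padded[x:x + 3, y:y + 3, z:z + 3].ravel()
                codes[(x, y, z)] = int(block @ weights)
            return codes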

  17. From Parts to Identity: Invariance and Sensitivity of Face Representations to Different Face Halves.

    PubMed

    Anzellotti, Stefano; Caramazza, Alfonso

    2016-05-01

    Recognizing the identity of a face is computationally challenging, because it requires distinguishing between similar images depicting different people, while recognizing even very different images depicting a same person. Previous human fMRI studies investigated representations of face identity in the presence of changes in viewpoint and in expression. Despite the importance of holistic processing for face recognition, an investigation of representations of face identity across different face parts is missing. To fill this gap, we investigated representations of face identity and their invariance across different face halves. Information about face identity with invariance across changes in the face half was individuated in the right anterior temporal lobe, indicating this region as the most plausible candidate brain area for the representation of face identity. In a complementary analysis, information distinguishing between different face halves was found to decline along the posterior to anterior axis in the ventral stream. PMID:25628344

  18. From Parts to Identity: Invariance and Sensitivity of Face Representations to Different Face Halves.

    PubMed

    Anzellotti, Stefano; Caramazza, Alfonso

    2016-05-01

    Recognizing the identity of a face is computationally challenging, because it requires distinguishing between similar images depicting different people, while recognizing even very different images depicting a same person. Previous human fMRI studies investigated representations of face identity in the presence of changes in viewpoint and in expression. Despite the importance of holistic processing for face recognition, an investigation of representations of face identity across different face parts is missing. To fill this gap, we investigated representations of face identity and their invariance across different face halves. Information about face identity with invariance across changes in the face half was individuated in the right anterior temporal lobe, indicating this region as the most plausible candidate brain area for the representation of face identity. In a complementary analysis, information distinguishing between different face halves was found to decline along the posterior to anterior axis in the ventral stream.

  19. Improving low-dose cardiac CT images using 3D sparse representation based processing

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in diagnoses of coronary artery diseases due to its continuously improving temporal and spatial resolution. When helical CT with a lower pitch scanning mode is used, the effective radiation dose can be significant when compared to other radiological exams. Many methods have been developed to reduce the radiation dose in coronary CT exams, including high-pitch scans using dual-source CT scanners and step-and-shoot scanning modes for both single-source and dual-source CT scanners. Additionally, software methods have also been proposed to reduce noise in the reconstructed CT images and thus offer the opportunity to reduce radiation dose while maintaining the desired diagnostic performance for a certain imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, in this paper, a 3D dictionary representation-based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in the sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.

  20. Sparse Representation of Deformable 3D Organs with Spherical Harmonics and Structured Dictionary

    PubMed Central

    Wang, Dan; Tewfik, Ahmed H.; Zhang, Yingchun; Shen, Yunhe

    2011-01-01

    This paper proposes a novel algorithm to sparsely represent a deformable surface (SRDS) with low dimensionality, based on spherical harmonic decomposition (SHD) and orthogonal subspace pursuit (OSP). The key idea of the SRDS method is to identify the subspaces from a training data set in the transformed spherical harmonic domain and then cluster each deformation into the best-fit subspace for fast and accurate representation. The algorithm is also generalized to applications involving organs with both interior and exterior surfaces. To test the feasibility, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques, and then both ex vivo and in vivo experiments are conducted using 3D magnetic resonance imaging (MRI) scans for verification in practical settings. All results demonstrate that the proposed algorithm features sparse representation of deformable surfaces with low dimensionality and high accuracy. Specifically, the precision, evaluated as the maximum error distance between the reconstructed surface and the MRI ground truth, is better than 3 mm in real MRI experiments. PMID:21941524
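
    A hedged sketch of the SHD step is shown below: spherical-harmonic coefficients are fitted by least squares to a star-shaped surface's radial function sampled over directions from its centroid, using SciPy's spherical harmonics. The real-valued basis construction, degree limit, and sampling are illustrative assumptions; the subspace-pursuit and clustering stages are not shown.

        import numpy as np
        from scipy.special import sph_harm

        def shd_coefficients(theta, phi, radius, l_max=8):
            """Least-squares spherical-harmonic fit of a radial surface function.

            theta : polar angles in [0, pi], phi : azimuths in [0, 2*pi),
            radius: sampled distances from the surface centroid along (theta, phi).
            Returns a real coefficient vector over all (l, m) with l <= l_max.
            """
            basis = []
            for ell in range(l_max + 1):
                for m in range(-ell, ell + 1):
                    Y = sph_harm(m, ell, phi, theta)   # SciPy order: (m, l, azimuth, polar)
                    basis.append(Y.real if m >= 0 else Y.imag)
            A = np.column_stack(basis)                 # (n_samples, n_coeffs) design matrix
            coeffs, *_ = np.linalg.lstsq(A, radius, rcond=None)
            return coeffs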

  1. Prediction of 3D chip formation in the facing cutting with lathe machine using FEM

    NASA Astrophysics Data System (ADS)

    Prasetyo, Yudhi; Tauviqirrahman, Mohamad; Rusnaldy

    2016-04-01

    This paper presents the prediction of chip formation in the machining process on a lathe, focusing specifically on facing cuts (face turning). The main purpose is to propose a new approach to predicting chip formation under varying cutting directions, i.e., the backward and forward directions. In addition, the interaction between stress and chip formation during the cutting process was also investigated. The simulations were conducted using a three-dimensional (3D) finite element method based on ABAQUS software, with aluminum and high-speed steel (HSS) as the workpiece and tool materials, respectively. The simulation results showed that the chip produced using the backward direction exhibits better formation than that produced using the conventional (forward) direction.

  2. A general framework for face reconstruction using single still image based on 2D-to-3D transformation kernel.

    PubMed

    Fooprateepsiri, Rerkchai; Kurutach, Werasak

    2014-03-01

    Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Accuracy of the authentication is reduced when the pose, illumination, and expression of the training face images differ from those of the test image. The methods in this paper are designed to improve the accuracy of a features-based face recognition system when the poses of the input images and training images differ. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions such as complex pose, illumination and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination and expression. PMID:24529782

  3. A general framework for face reconstruction using single still image based on 2D-to-3D transformation kernel.

    PubMed

    Fooprateepsiri, Rerkchai; Kurutach, Werasak

    2014-03-01

    Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Accuracy of the authentication is reduced when the pose, illumination and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a features-based face recognition system when the poses of the input and training images differ. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex pose, illumination and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination and expression.

  4. Novel irregular mesh tagging algorithm for wound synthesis on a 3D face.

    PubMed

    Lee, Sangyong; Chin, Seongah

    2015-01-01

    Recently, advanced visualization techniques in computer graphics have considerably enhanced the visual appearance of synthetic models. To realize enhanced visual graphics for synthetic medical effects, the first step, before rendering techniques are applied, is to attach albedo textures to the region where a certain graphic is to be rendered. For instance, in order to render wound textures efficiently, the first step is to recognize the area where the user wants to attach a wound. However, in general, face indices are not stored in sequential order, which makes sub-texturing difficult. In this paper, we present a novel mesh tagging algorithm that utilizes mesh traversal and level extension for the general case of wound sub-texture mapping and selected-region deformation in a three-dimensional (3D) model. This method works automatically on both regular and irregular mesh surfaces. The approach consists of mesh selection (MS), mesh leveling (ML), and mesh tagging (MT). To validate our approach, we performed experiments for synthesizing wounds on a 3D face model and on a simulated mesh. PMID:26405904
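
    The level-extension idea described above can be illustrated with a breadth-first traversal of the face-adjacency graph of a triangle mesh. This is a minimal sketch under assumed data structures (a list of vertex-index triples), not the authors' MS/ML/MT implementation; it simply tags each face with its edge-adjacency distance from a seed face, which works on both regular and irregular meshes.

      # Minimal sketch: tag mesh faces with their "level" (edge-adjacency distance)
      # from a seed face, a possible basis for selecting a wound sub-texture region.
      from collections import defaultdict, deque

      def face_levels(faces, seed_face, max_level):
          """faces: list of (i, j, k) vertex-index triples; returns {face_index: level}."""
          edge_to_faces = defaultdict(list)
          for f, (i, j, k) in enumerate(faces):
              for e in ((i, j), (j, k), (k, i)):
                  edge_to_faces[tuple(sorted(e))].append(f)
          levels = {seed_face: 0}
          queue = deque([seed_face])
          while queue:
              f = queue.popleft()
              if levels[f] >= max_level:
                  continue
              i, j, k = faces[f]
              for e in ((i, j), (j, k), (k, i)):
                  for g in edge_to_faces[tuple(sorted(e))]:
                      if g not in levels:          # first visit assigns the smallest level
                          levels[g] = levels[f] + 1
                          queue.append(g)
          return levels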

  5. Familiar face + novel face = familiar face? Representational bias in the perception of morphed faces in chimpanzees

    PubMed Central

    Myowa-Yamakoshi, Masako

    2016-01-01

    Highly social animals possess a well-developed ability to distinguish the faces of familiar from novel conspecifics to induce distinct behaviors for maintaining society. However, the behaviors of animals when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers with faces resembling known individuals, have not been well characterised. Using a morphing technique and preferential-looking paradigm, we address this question via the chimpanzee’s facial–recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces and (3) intermediate morphed faces that were 50% familiar and 50% novel faces of conspecifics. We found that chimpanzees spent more time looking at novel faces and scanned novel faces more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regards to the fixation duration, fixation count, and saccade length for facial scanning, even though the participant was encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a feeling-of-familiarity bias: chimpanzees perceive an intermediate face as familiar by detecting traces of a known individual, with a 50% blend being sufficient to evoke familiarity. PMID:27602275

  6. Familiar face + novel face = familiar face? Representational bias in the perception of morphed faces in chimpanzees.

    PubMed

    Matsuda, Yoshi-Taka; Myowa-Yamakoshi, Masako; Hirata, Satoshi

    2016-01-01

    Highly social animals possess a well-developed ability to distinguish the faces of familiar from novel conspecifics to induce distinct behaviors for maintaining society. However, the behaviors of animals when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers with faces resembling known individuals, have not been well characterised. Using a morphing technique and preferential-looking paradigm, we address this question via the chimpanzee's facial-recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces and (3) intermediate morphed faces that were 50% familiar and 50% novel faces of conspecifics. We found that chimpanzees spent more time looking at novel faces and scanned novel faces more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regards to the fixation duration, fixation count, and saccade length for facial scanning, even though the participant was encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a feeling-of-familiarity bias: chimpanzees perceive an intermediate face as familiar by detecting traces of a known individual, with a 50% blend being sufficient to evoke familiarity. PMID:27602275

  7. Familiar face + novel face = familiar face? Representational bias in the perception of morphed faces in chimpanzees

    PubMed Central

    Myowa-Yamakoshi, Masako

    2016-01-01

    Highly social animals possess a well-developed ability to distinguish the faces of familiar from novel conspecifics to induce distinct behaviors for maintaining society. However, the behaviors of animals when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers with faces resembling known individuals, have not been well characterised. Using a morphing technique and preferential-looking paradigm, we address this question via the chimpanzee’s facial–recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces and (3) intermediate morphed faces that were 50% familiar and 50% novel faces of conspecifics. We found that chimpanzees spent more time looking at novel faces and scanned novel faces more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regards to the fixation duration, fixation count, and saccade length for facial scanning, even though the participant was encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a feeling-of-familiarity bias: chimpanzees perceive an intermediate face as familiar by detecting traces of a known individual, with a 50% blend being sufficient to evoke familiarity.

  8. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been successfully applied to the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  9. Supervised Filter Learning for Representation Based Face Recognition.

    PubMed

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been successfully applied to the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030
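
    As background for the representation residuals mentioned above, the sketch below shows a plain Linear Regression Classification (LRC) step: each class represents the test vector by a least-squares combination of its own training samples, and the class with the smallest residual wins. This is an illustrative baseline under assumed data layouts, not the paper's filter-learning algorithm; the LBP extraction and filtering stages are omitted.

      # Minimal sketch of class-wise representation residual classification (LRC-style).
      import numpy as np

      def lrc_predict(class_dicts, y):
          """class_dicts: {label: (d, n_c) array of training samples as columns};
          y: (d,) test vector. Returns the label with the smallest residual."""
          best_label, best_residual = None, np.inf
          for label, X in class_dicts.items():
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # class-wise least squares
              residual = np.linalg.norm(y - X @ beta)
              if residual < best_residual:
                  best_label, best_residual = label, residual
          return best_label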

  10. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and on different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar to or better than the state of the art on textured and non-textured shape retrieval benchmarks and give interesting insights into the effectiveness of different shape descriptors and graph kernels.

  11. The 3D representation of the new transformation from the terrestrial to the celestial system.

    NASA Astrophysics Data System (ADS)

    Dehant, V.; de Viron, O.; Capitaine, N.

    2006-08-01

    To study the sky from the Earth or to use navigation satellites, we need two reference systems: a celestial reference system, as fixed as possible with respect to the inertial frame, and a terrestrial reference system, rotating with the Earth. Additionally, we need a way to go from one reference system to the other. This transformation involves the Earth rotation rate, the polar motion, and the precession-nutation. The transformation is done using an intermediate system, in which the Earth rotation itself is corrected for. Previously, the intermediate system was related to the equinox; the new paradigm involves a point, denoted the Celestial Intermediate Origin (CIO), which, due to its kinematical property of "Non Rotating Origin", allows a better description of the Earth's length of day. Whether or not the CIO is used only affects this intermediate frame. The new transformation involving the CIO is additionally much simpler. Moreover, the use of the CIO allows an elegant separation between the polar motion, the precession-nutation and the rotation rate variation. In this presentation we will show 3D representations that explain all this.

  12. Flow control on a 3D backward facing ramp by pulsed jets

    NASA Astrophysics Data System (ADS)

    Joseph, Pierric; Bortolus, Dorian; Grasso, Francesco

    2014-06-01

    This paper presents an experimental study of flow separation control over a 3D backward facing ramp by means of pulsed jets. Such geometry has been selected to reproduce flow phenomena of interest for the automotive industry. The base flow has been characterised using PIV and pressure measurements. The results show that the classical notchback topology is correctly reproduced. A control system based on magnetic valves has been used to produce the pulsed jets whose properties have been characterised by hot wire anemometry. In order to shed some light on the role of the different parameters affecting the suppression of the slant recirculation area, a parametric study has been carried out by varying the frequency and the momentum coefficient of the jets for several Reynolds numbers.

  13. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm. PMID:21576739

  14. Three 3D graphical representations of DNA primary sequences based on the classifications of DNA bases and their applications.

    PubMed

    Xie, Guosen; Mo, Zhongxi

    2011-01-21

    In this article, we introduce three 3D graphical representations of DNA primary sequences, which we call RY-curve, MK-curve and SW-curve, based on three classifications of the DNA bases. The advantages of our representations are that (i) these 3D curves are strictly non-degenerate and there is no loss of information when transferring a DNA sequence to its mathematical representation and (ii) the coordinates of every node on these 3D curves have clear biological implication. Two applications of these 3D curves are presented: (a) a simple formula is derived to calculate the content of the four bases (A, G, C and T) from the coordinates of nodes on the curves; and (b) a 12-component characteristic vector is constructed to compare similarity among DNA sequences from different species based on the geometrical centers of the 3D curves. As examples, we examine similarity among the coding sequences of the first exon of beta-globin gene from eleven species and validate similarity of cDNA sequences of beta-globin gene from eight species.
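
    To make the idea of a non-degenerate 3D sequence curve concrete, the sketch below builds a simple curve in which the x coordinate advances by one per base (so no information is lost) and the y and z steps encode the purine/pyrimidine and amino/keto classifications. The step vectors and helper names are illustrative assumptions and are not the published RY-, MK- or SW-curve definitions; as noted in the comments, this simplified endpoint formula recovers class totals rather than all four individual base counts.

      # Illustrative sketch only (not the published curve definitions).
      import numpy as np

      STEPS = {            # (dy, dz): y separates purines (A,G) from pyrimidines (C,T),
          "A": (+1, +1),   # z separates amino (A,C) from keto (G,T)
          "G": (+1, -1),
          "C": (-1, +1),
          "T": (-1, -1),
      }

      def dna_curve(seq):
          """Return an (n+1, 3) array of curve nodes; x = position, y/z = class sums."""
          nodes = [(0.0, 0.0, 0.0)]
          for n, base in enumerate(seq.upper(), start=1):
              dy, dz = STEPS[base]
              _, y, z = nodes[-1]
              nodes.append((float(n), y + dy, z + dz))
          return np.array(nodes)

      def class_counts_from_endpoint(nodes):
          """From the last node: n = x, y = (A+G)-(C+T), z = (A+C)-(G+T).
          This recovers the four class totals; pinning down all four individual
          base counts needs one more relation than this two-class encoding gives."""
          n, y, z = nodes[-1]
          return {"A+G": (n + y) / 2, "C+T": (n - y) / 2,
                  "A+C": (n + z) / 2, "G+T": (n - z) / 2}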

  15. Anticipatory Spatial Representation of 3D Regions Explored by Sighted Observers and a Deaf-and-Blind-Observer

    ERIC Educational Resources Information Center

    Intraub, Helene

    2004-01-01

    Viewers who study photographs of scenes tend to remember having seen beyond the boundaries of the view ["boundary extension"; J. Exp. Psychol. Learn. Mem. Cogn. 15 (1989) 179]. Is this a fundamental aspect of scene representation? Forty undergraduates explored bounded regions of six common (3D) scenes, visually or haptically (while blindfolded)…

  16. Identity from Variation: Representations of Faces Derived from Multiple Instances

    ERIC Educational Resources Information Center

    Burton, A. Mike; Kramer, Robin S. S.; Ritchie, Kay L.; Jenkins, Rob

    2016-01-01

    Research in face recognition has tended to focus on discriminating between individuals, or "telling people apart." It has recently become clear that it is also necessary to understand how images of the same person can vary, or "telling people together." Learning a new face, and tracking its representation as it changes from…

  17. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practise and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practise. The application of metadata representation and 3D modelling are the two key issues to discuss. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitation of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp, at this stage, could only be used to model basic visual representations, and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. Results of the research recommend SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  18. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation proposed in [1] not only has the advantages of RBC but also is computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method can always classify more accurately than collaborative representation! The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and training sample. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Actually, the analysis shows that the matching scores generated from direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
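
    The complementarity argument above boils down to a simple score-level fusion. The sketch below normalises the two per-class dissimilarity scores to a common range and combines them with a weight; the min-max normalisation, the weight alpha and the function names are our assumptions rather than the paper's exact fusion rule.

      # Minimal sketch of score-level fusion of direct matching and collaborative
      # representation (smaller scores mean a better match).
      import numpy as np

      def fuse_scores(direct_dist, cr_residual, alpha=0.5):
          """direct_dist, cr_residual: (n_classes,) dissimilarity arrays.
          Returns (predicted class index, fused score vector)."""
          def minmax(s):
              s = np.asarray(s, dtype=float)
              return (s - s.min()) / (s.max() - s.min() + 1e-12)
          fused = alpha * minmax(direct_dist) + (1.0 - alpha) * minmax(cr_residual)
          return int(np.argmin(fused)), fused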

  19. Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation

    PubMed Central

    Fuentes, Christina T.; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick

    2013-01-01

    Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness. PMID:24130790

  20. Identity From Variation: Representations of Faces Derived From Multiple Instances.

    PubMed

    Burton, A Mike; Kramer, Robin S S; Ritchie, Kay L; Jenkins, Rob

    2016-01-01

    Research in face recognition has tended to focus on discriminating between individuals, or "telling people apart." It has recently become clear that it is also necessary to understand how images of the same person can vary, or "telling people together." Learning a new face, and tracking its representation as it changes from unfamiliar to familiar, involves an abstraction of the variability in different images of that person's face. Here, we present an application of principal components analysis computed across different photos of the same person. We demonstrate that people vary in systematic ways, and that this variability is idiosyncratic: the dimensions of variability in one face do not generalize well to another. Learning a new face therefore entails learning how that face varies. We present evidence for this proposal and suggest that it provides an explanation for various effects in face recognition. We conclude by making a number of testable predictions derived from this framework. PMID:25824013
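
    The principal components analysis across different photos of the same person can be sketched very simply: stack aligned, vectorised photos of one identity, subtract the mean, and take the leading singular vectors as that face's own dimensions of variability. The preprocessing, alignment and component count below are assumptions for illustration, not the authors' pipeline.

      # Minimal sketch: per-identity PCA over several aligned photos of one person.
      import numpy as np

      def face_variability_pca(images, n_components=5):
          """images: (n_photos, h*w) array of aligned, vectorised photos of one person.
          Returns (mean face, principal components, explained variance ratios)."""
          X = np.asarray(images, dtype=float)
          mean_face = X.mean(axis=0)
          U, S, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
          components = Vt[:n_components]                  # idiosyncratic directions of variation
          explained = (S[:n_components] ** 2) / (S ** 2).sum()
          return mean_face, components, explained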

  1. Extraction and refinement of building faces in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri

    2013-10-01

    In this paper, we present an approach to generate a 3D model of an urban scene out of sensor data. The first milestone along the way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their outlines. This has already been accomplished within our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split up complex building blocks into parts. Two different approaches are used: when underlying 2D ground polygons are available, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, extraction of dominant planes takes place, by using either the RANSAC or the J-linkage algorithm. They operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected properties of roofs and walls are not fulfilled. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
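
    For concreteness, the dominant-plane extraction step can be illustrated with a plain RANSAC plane fit over the points of one building part. The iteration count, inlier threshold and function name below are illustrative assumptions, not the parameters used in the paper, and the J-linkage alternative is not shown.

      # Minimal sketch of RANSAC plane extraction from a building-part point cloud.
      import numpy as np

      def ransac_plane(points, n_iter=500, dist_thresh=0.05, seed=0):
          """points: (n, 3) array. Returns (unit normal, d, inlier mask) for the dominant
          plane written as normal . x + d = 0."""
          points = np.asarray(points, dtype=float)
          rng = np.random.default_rng(seed)
          best_inliers = np.zeros(len(points), dtype=bool)
          best_plane = None
          for _ in range(n_iter):
              p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(p1 - p0, p2 - p0)
              norm = np.linalg.norm(normal)
              if norm < 1e-12:                       # skip degenerate (collinear) samples
                  continue
              normal /= norm
              d = -normal @ p0
              inliers = np.abs(points @ normal + d) < dist_thresh
              if inliers.sum() > best_inliers.sum():
                  best_inliers, best_plane = inliers, (normal, d)
          if best_plane is None:
              raise ValueError("no valid plane hypothesis found")
          return best_plane[0], best_plane[1], best_inliers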

  2. Uncertainty analysis in 3D global models: Aerosol representation in MOZART-4

    NASA Astrophysics Data System (ADS)

    Gasore, J.; Prinn, R. G.

    2012-12-01

    The Probabilistic Collocation Method (PCM) has been proven to be an efficient general method of uncertainty analysis in atmospheric models (Tatang et al., 1997; Cohen & Prinn, 2011). However, its application has been mainly limited to urban- and regional-scale models and chemical source-sink models, because of the drastic increase in computational cost when the dimension of uncertain parameters increases. Moreover, the high-dimensional output of global models has to be reduced to allow a computationally reasonable number of polynomials to be generated. This dimensional reduction has been mainly achieved by grouping the model grids into a few regions based on prior knowledge and expectations; urban versus rural for instance. As the model output is used to estimate the coefficients of the polynomial chaos expansion (PCE), the arbitrariness in the regional aggregation can generate problems in estimating uncertainties. To address these issues in a complex model, we apply the probabilistic collocation method of uncertainty analysis to the aerosol representation in MOZART-4, which is a 3D global chemical transport model (Emmons et al., 2010). Thereafter, we deterministically delineate the model output surface into regions of homogeneous response using the method of Principal Component Analysis. This allows the quantification of the uncertainty associated with the dimensional reduction. Because only a bulk mass is calculated online in MOZART-4, a lognormal number distribution is assumed with a priori fixed scale and location parameters, to calculate the surface area for heterogeneous reactions involving tropospheric oxidants. We have applied the PCM to the six parameters of the lognormal number distributions of Black Carbon, Organic Carbon and Sulfate. We have carried out a Monte-Carlo sampling from the probability density functions of the six uncertain parameters, using the reduced PCE model. The global mean concentration of major tropospheric oxidants did not show a

  3. Cosine series representation of 3D curves and its application to white matter fiber bundles in diffusion tensor imaging

    PubMed Central

    Adluru, Nagesh; Lee, Jee Eun; Lazar, Mariana; Lainhart, Janet E.; Alexander, Andrew L.

    2011-01-01

    We present a novel cosine series representation for encoding fiber bundles consisting of multiple 3D curves. The coordinates of curves are parameterized as coefficients of cosine series expansion. We address the issue of registration, averaging and statistical inference on curves in a unified Hilbert space framework. Unlike traditional splines, the proposed method does not have internal knots and explicitly represents curves as a linear combination of cosine basis. This simplicity in the representation enables us to design statistical models, register curves and perform subsequent analysis in a more unified statistical framework than splines. The proposed representation is applied in characterizing abnormal shape of white matter fiber tracts passing through the splenium of the corpus callosum in autistic subjects. For an arbitrary tract, a 19 degree expansion is usually found to be sufficient to reconstruct the tract with 60 parameters. PMID:23316267
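
    The coordinate encoding described above can be sketched directly: parameterise a fiber-tract curve by normalised arc length on [0, 1] and fit a truncated cosine series to each coordinate by least squares; a degree-19 expansion then gives 20 coefficients per axis, i.e. 60 parameters per tract, matching the figure quoted above. The basis normalisation, registration and statistical inference steps of the paper are not reproduced here, and the function names are ours.

      # Minimal sketch: cosine series representation of a 3D curve.
      import numpy as np

      def cosine_series_fit(curve, degree=19):
          """curve: (n_points, 3) ordered samples along a tract.
          Returns a (degree+1, 3) coefficient matrix, one column per coordinate."""
          curve = np.asarray(curve, dtype=float)
          seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
          t = np.concatenate([[0.0], np.cumsum(seg)])
          t /= t[-1]                                       # arc-length parameter in [0, 1]
          k = np.arange(degree + 1)
          basis = np.cos(np.pi * np.outer(t, k))           # (n_points, degree+1)
          coeffs, *_ = np.linalg.lstsq(basis, curve, rcond=None)
          return coeffs

      def cosine_series_eval(coeffs, n_points=100):
          """Reconstruct the curve from its cosine coefficients."""
          t = np.linspace(0.0, 1.0, n_points)
          k = np.arange(coeffs.shape[0])
          return np.cos(np.pi * np.outer(t, k)) @ coeffs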

  4. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Leister, Norbert

    2013-02-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel transform based or point-source based raytracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. For proper hologram reconstruction that implies a simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering inherent SLM parameter such as modulation type and bit depth on their reconstruction performance such as diffraction efficiency and SNR. We review the three implementation schemes of Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable a fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
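
    Of the three schemes listed above, the Burckhardt amplitude-only representation is the simplest to sketch: each complex hologram value is rewritten as a sum of three non-negative amplitudes on phasors at 0, 120 and 240 degrees, which an amplitude-only SLM can display with three sub-pixels per hologram cell. The decomposition below is an assumed textbook-style formulation, not SeeReal's implementation.

      # Minimal sketch of a Burckhardt-style decomposition of one complex value.
      import numpy as np

      PHASORS = np.exp(2j * np.pi * np.arange(3) / 3.0)    # phasors at 0, 120, 240 degrees

      def burckhardt_components(c):
          """Return (a0, a1, a2) >= 0 with a0*P0 + a1*P1 + a2*P2 == c; at most two
          components are nonzero, chosen from the two phasors bracketing arg(c)."""
          sector = int(np.floor((np.angle(c) % (2 * np.pi)) / (2 * np.pi / 3))) % 3
          k, l = sector, (sector + 1) % 3
          M = np.array([[PHASORS[k].real, PHASORS[l].real],
                        [PHASORS[k].imag, PHASORS[l].imag]])
          a_k, a_l = np.linalg.solve(M, np.array([c.real, c.imag]))
          out = np.zeros(3)
          out[k], out[l] = a_k, a_l
          return out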

  5. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason for this is that variation in pose, facial expression, and illumination of the facial image can be rather severe, and the number of available facial images is typically smaller than the dimensionality of the facial image, so a linear combination of all the training samples is not able to fully represent the test sample. In this study, we proposed a novel framework to improve representation-based classification (RBC). The framework first ran the sparse representation algorithm and determined the unavoidable deviation between the test sample and the optimal linear combination of all the training samples used to represent it. It then exploited the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients were used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework is theoretically sound. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. The experimental results on a variety of face databases demonstrated that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.

  6. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In the past years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, ground and air-borne measurements. These multidimensional multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of the atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing and interactive exploration and visualization using Virtual Reality (VR) technology. We have had considerable success in research studies on extreme weather situations. In this paper we will elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We will explain how important it is to control the degrees of freedom given to the users (forecasters/scientists) during interaction; 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We will present a novel approach to operational 3D visualization user interfaces (UI) that largely eliminates the effort and time it usually takes to set up the visualization parameters and an appropriate camera view of a given atmospheric phenomenon. We have found our inspiration in the way our operational forecasters work in the weather room. We decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines web-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for the selected visualization setting. Finally, we would like to present the first user experiences with this approach.

  7. Representation of chemical information in OASIS centralized 3D database for existing chemicals.

    PubMed

    Nikolov, Nikolai; Grancharov, Vanio; Stoyanova, Galya; Pavlov, Todor; Mekenyan, Ovanes

    2006-01-01

    The present inventory of existing chemicals in regulatory agencies in North America and Europe, encompassing the chemicals of the European Chemicals Bureau (EINECS, with 61 573 discrete chemicals); the Danish EPA (159 448 chemicals); the U.S. EPA (TSCA, 56 882 chemicals; HPVC, 10 546 chemicals) and pesticides' active and inactive ingredients of the U.S. EPA (1379 chemicals); the Organization for Economic Cooperation and Development (HPVC, 4750 chemicals); Environment Canada (DSL, 10851 chemicals); and the Japanese Ministry of Economy, Trade, and Industry (16811), was combined in a centralized 3D database for existing chemicals. The total number of unique chemicals from all of these databases exceeded 185 500. Defined and undefined chemical mixtures and polymers are handled, along with discrete (hydrolyzing and nonhydrolyzing) chemicals. The database manager provides the storage and retrieval of chemical structures with 2D and 3D data, accounting for molecular flexibility by using representative sets of conformers for each chemical. The electronic and geometric structures of all conformers are quantum-chemically optimized and evaluated. Hence, the database contains over 3.7 million 3D records with hundreds of millions of descriptor data items at the levels of structures, conformers, or atoms. The platform contains a highly developed search subsystem--a search is possible on Chemical Abstracts Service numbers; names; 2D and 3D fragment searches; structural, conformational, or atomic properties; affiliation in other chemical databases; structure similarity; logical combinations; saved queries; and search result exports. Models (collections of logically related descriptors) are supported, including information on a model's author, date, bioassay, organs/tissues, conditions, administration, and so forth. Fragments can be interactively constructed using a visual structure editor. A configurable database browser is designed for the inspection and editing of all types of

  8. Representation of chemical information in OASIS centralized 3D database for existing chemicals.

    PubMed

    Nikolov, Nikolai; Grancharov, Vanio; Stoyanova, Galya; Pavlov, Todor; Mekenyan, Ovanes

    2006-01-01

    The present inventory of existing chemicals in regulatory agencies in North America and Europe, encompassing the chemicals of the European Chemicals Bureau (EINECS, with 61 573 discrete chemicals); the Danish EPA (159 448 chemicals); the U.S. EPA (TSCA, 56 882 chemicals; HPVC, 10 546 chemicals) and pesticides' active and inactive ingredients of the U.S. EPA (1379 chemicals); the Organization for Economic Cooperation and Development (HPVC, 4750 chemicals); Environment Canada (DSL, 10851 chemicals); and the Japanese Ministry of Economy, Trade, and Industry (16811), was combined in a centralized 3D database for existing chemicals. The total number of unique chemicals from all of these databases exceeded 185 500. Defined and undefined chemical mixtures and polymers are handled, along with discrete (hydrolyzing and nonhydrolyzing) chemicals. The database manager provides the storage and retrieval of chemical structures with 2D and 3D data, accounting for molecular flexibility by using representative sets of conformers for each chemical. The electronic and geometric structures of all conformers are quantum-chemically optimized and evaluated. Hence, the database contains over 3.7 million 3D records with hundreds of millions of descriptor data items at the levels of structures, conformers, or atoms. The platform contains a highly developed search subsystem--a search is possible on Chemical Abstracts Service numbers; names; 2D and 3D fragment searches; structural, conformational, or atomic properties; affiliation in other chemical databases; structure similarity; logical combinations; saved queries; and search result exports. Models (collections of logically related descriptors) are supported, including information on a model's author, date, bioassay, organs/tissues, conditions, administration, and so forth. Fragments can be interactively constructed using a visual structure editor. A configurable database browser is designed for the inspection and editing of all types of

  9. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose to deploy a depth-first search segmentation algorithm traversing a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.

  10. The Representation of Cultural Heritage from Traditional Drawing to 3d Survey: the Case Study of Casamary's Abbey

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Saccone, M.

    2016-06-01

    In 3D survey the aspects most discussed in the scientific community are those related to the acquisition of data from integrated survey (laser scanner, photogrammetric, topographic and traditional direct), rather than those relating to the interpretation of the data. Yet in the methods of traditional representation, the data interpretation, such as that of the philological reconstruction, constitutes the most important aspect. It is therefore essential, in modern systems of survey and representation, to filter the information acquired. In the system based on the integrated survey that we have adopted, the 3D object, characterized by a cloud of georeferenced points defined by their color values, forms the core of the elaboration. It allows targeted analyses to be carried out, using section planes as a tool for selecting and filtering data, comparable with those of traditional drawings. In the case study of the Abbey of Casamari (Veroli), one of the most important Cistercian settlements in Italy, surveyed under an agreement between the Ministry of Cultural Heritage and Activities and Tourism (MiBACT) and the University of RomaTre within the project "Assessment of the seismic safety of the state museum", the reference 3D model, consisting of the superposition of geo-referenced data from various surveys, is the tool with which to develop representative models comparable to traditional ones. It provides the necessary spatial environment for drawing up plans and sections with sufficient definition to support thematic analyses related to construction phases, state of deterioration and structural features.

  11. 3-D representation of aquitard topography using ground-penetrating radar

    SciTech Connect

    Young, R.A.; Sun, Jingsheng

    1995-12-31

    The topography of a clay aquitard is defined by 3D Ground Penetrating Radar (GPR) data at Hill Air Force Base, Utah. Conventional processing augmented by multichannel domain filtering shows a strong reflection from a depth of 20-30 ft despite attenuation by an artificial clay cap approximately 2 ft thick. This reflection correlates very closely with the top of the aquitard as seen in lithology logs at 3 wells crossed by common offset radar profiles from the 3D dataset. Lateral and vertical resolution along the boundary are approximately 2 ft and 1 ft, respectively. The boundary shows abrupt topographic variation of 5 ft over horizontal distances of 20 ft or less and is probably due to vigorous erosion by streams during lowstands of ancient Lake Bonneville. This irregular topography may provide depressions for accumulation of hydrocarbons and chlorinated organic pollutants. A ridge running the length of the survey area may channel movement of ground water and of hydrocarbons trapped at the surface of the water table. Depth slices through a 3D volume, and picked points along the aquitard displayed in depth and relative elevation perspectives provide much more useful visualization than several 2D lines by themselves. The three-dimensional GPR image provides far more detailed definition of geologic boundaries than does projection of soil boring logs into two-dimensional profiles.

  12. Statistical representation of high-dimensional deformation fields with application to statistically constrained 3D warping.

    PubMed

    Xue, Zhong; Shen, Dinggang; Davatzikos, Christos

    2006-10-01

    This paper proposes a 3D statistical model aiming at effectively capturing statistics of high-dimensional deformation fields and then uses this prior knowledge to constrain 3D image warping. The conventional statistical shape model methods, such as the active shape model (ASM), have been very successful in modeling shape variability. However, their accuracy and effectiveness typically drop dramatically in high-dimensionality problems involving relatively small training datasets, which is customary in 3D and 4D medical imaging applications. The proposed statistical model of deformation (SMD) uses wavelet-based decompositions coupled with PCA in each wavelet band, in order to more accurately estimate the pdf of high-dimensional deformation fields, when a relatively small number of training samples are available. SMD is further used as statistical prior to regularize the deformation field in an SMD-constrained deformable registration framework. As a result, more robust registration results are obtained relative to using generic smoothness constraints on deformation fields, such as Laplacian-based regularization. In experiments, we first illustrate the performance of SMD in representing the variability of deformation fields and then evaluate the performance of the SMD-constrained registration, via comparing a hierarchical volumetric image registration algorithm, HAMMER, with its SMD-constrained version, referred to as SMD+HAMMER. This SMD-constrained deformable registration framework can potentially incorporate various registration algorithms to improve robustness and stability via statistical shape constraints.

  13. Average Cross-Sectional Area of DebriSat Fragments Using Volumetrically Constructed 3D Representations

    NASA Technical Reports Server (NTRS)

    Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.

    2016-01-01

    Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for
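
    The convex-body relation quoted above (average cross-sectional area equals one quarter of the total surface area) can be read directly off a triangle mesh, as in the sketch below; concave fragments are the case where this shortcut fails and projection averaging such as Approach A is needed. The function names and mesh layout are our assumptions, not the DebriSat processing code.

      # Minimal sketch: average cross-sectional area of a convex triangle mesh
      # via Cauchy's formula (total surface area / 4).
      import numpy as np

      def total_surface_area(vertices, faces):
          """vertices: (n, 3) array; faces: (m, 3) vertex-index triples."""
          v = np.asarray(vertices, dtype=float)
          tri = v[np.asarray(faces)]                       # (m, 3, 3) triangle corners
          cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
          return 0.5 * np.linalg.norm(cross, axis=1).sum()

      def average_cross_section_convex(vertices, faces):
          return total_surface_area(vertices, faces) / 4.0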

  14. 3D computer data capture and imaging applied to the face and jaws.

    PubMed

    Spencer, R; Hathaway, R; Speculand, B

    1996-02-01

    There have been few attempts in the past at 3D computer modelling of facial deformity because of the difficulties with generating accurate three-dimensional data and subsequent image regeneration and manipulation. We report the application of computer aided engineering techniques to the study of jaw deformity. The construction of a 3D image of the mandible using a Ferranti co-ordinate measuring machine for data capture and the 'DUCT5' surface modelling programme for image regeneration is described. The potential application of this work will be discussed. PMID:8645664

  15. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
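
    A stripped-down version of the two stages described above is sketched here: virtual samples are element-wise products of pairs of training images from the same subject, and classification represents the test vector over its K nearest training samples and scores each class by its representation residual. Data layouts, the residual-based decision rule and parameter values are our assumptions for illustration, not the paper's exact procedure.

      # Minimal sketch: virtual sample generation plus K-nearest representation-based
      # classification (image vectors are assumed to be numpy arrays of equal length).
      import numpy as np
      from itertools import combinations

      def generate_virtual_samples(samples_by_subject):
          """samples_by_subject: {label: list of (d,) image vectors}. Returns augmented dict."""
          augmented = {}
          for label, imgs in samples_by_subject.items():
              virtual = [a * b for a, b in combinations(imgs, 2)]   # element-wise products
              augmented[label] = list(imgs) + virtual
          return augmented

      def knn_representation_classify(train_X, train_y, test, k=10):
          """Represent `test` over its k nearest training samples; assign the class
          whose selected samples leave the smallest reconstruction residual."""
          train_X = np.asarray(train_X, dtype=float)
          test = np.asarray(test, dtype=float)
          nearest = np.argsort(np.linalg.norm(train_X - test, axis=1))[:k]
          A = train_X[nearest].T                                    # (d, k)
          coeffs, *_ = np.linalg.lstsq(A, test, rcond=None)
          labels = np.asarray(train_y)[nearest]
          scores = {c: np.linalg.norm(test - A[:, labels == c] @ coeffs[labels == c])
                    for c in np.unique(labels)}
          return min(scores, key=scores.get)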

  16. Is principal component analysis an effective tool to predict face attractiveness? A contribution based on real 3D faces of highly selected attractive women, scanned with stereophotogrammetry.

    PubMed

    Galantucci, Luigi Maria; Di Gioia, Eliana; Lavecchia, Fulvio; Percoco, Gianluca

    2014-05-01

    In the literature, several papers report studies on mathematical models used to describe facial features and to predict female facial beauty based on 3D human face data. Many authors have proposed the principal component analysis (PCA) method that permits modeling of the entire human face using a limited number of parameters. In some cases, these models have been correlated with beauty classifications, obtaining good attractiveness predictability using wrapped 2D or 3D models. To verify these results, in this paper, the authors conducted a three-dimensional digitization study of 66 very attractive female subjects using a computerized noninvasive tool known as 3D digital photogrammetry. The sample consisted of the 64 contestants of the final phase of the Miss Italy 2010 beauty contest, plus the two highest ranked contestants in the 2009 competition. PCA was conducted on this sample of real faces to verify whether there is a correlation between ranking and the principal components of the face models. There was no correlation; therefore, this hypothesis is not confirmed for our sample. Considering that the results of the contest are not solely a function of facial attractiveness, but are undoubtedly significantly influenced by it, the authors, based on their experience and on real faces, conclude that PCA is not a valid prediction tool for attractiveness. The database of features belonging to the analyzed sample is downloadable online, and further contributions are welcome. PMID:24728666

  17. Primary face motor area as the motor representation of articulation.

    PubMed

    Terao, Yasuo; Ugawa, Yoshikazu; Yamamoto, Tomotaka; Sakurai, Yasuhisa; Masumoto, Tomohiko; Abe, Osamu; Masutani, Yoshitaka; Aoki, Shigeki; Tsuji, Shoji

    2007-04-01

    No clinical data have yet been presented to show that a lesion localized to the primary motor area (M1) can cause severe transient impairment of articulation, although a motor representation for articulation has been suggested to exist within M1. Here we describe three cases of patients who developed severe dysarthria, temporarily mimicking speech arrest or aphemia, due to a localized brain lesion near the left face representation of the human primary motor cortex (face-M1). Speech was slow, effortful, lacking normal prosody, and more affected than expected from the degree of facial or tongue palsy. There was a mild deficit in tongue movements in the sagittal plane that impaired palatolingual contact and rapid tongue movements. The speech disturbance was limited to verbal output, without aphasia or orofacial apraxia. Overlay of magnetic resonance images revealed a localized cortical region near face-M1, which displayed high intensity on diffusion weighted images, while the main portion of the corticobulbar fibers arising from the lower third of the motor cortex was preserved. The cases suggest the existence of a localized brain region specialized for articulation near face-M1. Cortico-cortical fibers connecting face-M1 with the lower premotor areas including Broca's area may also be important for articulatory control. PMID:17380243

  18. A 3D sequence-independent representation of the protein data bank.

    PubMed

    Fischer, D; Tsai, C J; Nussinov, R; Wolfson, H

    1995-10-01

    Here we address the following questions. How many structurally different entries are there in the Protein Data Bank (PDB)? How do the proteins populate the structural universe? To investigate these questions a structurally non-redundant set of representative entries was selected from the PDB. Construction of such a dataset is not trivial: (i) the considerable size of the PDB requires a large number of comparisons (there were more than 3250 structures of protein chains available in May 1994); (ii) the PDB is highly redundant, containing many structurally similar entries, not necessarily with significant sequence homology, and (iii) there is no clear-cut definition of structural similarity. The latter depend on the criteria and methods used. Here, we analyze structural similarity ignoring protein topology. To date, representative sets have been selected either by hand, by sequence comparison techniques which ignore the three-dimensional (3D) structures of the proteins or by using sequence comparisons followed by linear structural comparison (i.e. the topology, or the sequential order of the chains, is enforced in the structural comparison). Here we describe a 3D sequence-independent automated and efficient method to obtain a representative set of protein molecules from the PDB which contains all unique structures and which is structurally non-redundant. The method has two novel features. The first is the use of strictly structural criteria in the selection process without taking into account the sequence information. To this end we employ a fast structural comparison algorithm which requires on average approximately 2 s per pairwise comparison on a workstation. The second novel feature is the iterative application of a heuristic clustering algorithm that greatly reduces the number of comparisons required. We obtain a representative set of 220 chains with resolution better than 3.0 A, or 268 chains including lower resolution entries, NMR entries and models. The

  19. A 3D sequence-independent representation of the protein data bank.

    PubMed

    Fischer, D; Tsai, C J; Nussinov, R; Wolfson, H

    1995-10-01

    Here we address the following questions. How many structurally different entries are there in the Protein Data Bank (PDB)? How do the proteins populate the structural universe? To investigate these questions a structurally non-redundant set of representative entries was selected from the PDB. Construction of such a dataset is not trivial: (i) the considerable size of the PDB requires a large number of comparisons (there were more than 3250 structures of protein chains available in May 1994); (ii) the PDB is highly redundant, containing many structurally similar entries, not necessarily with significant sequence homology, and (iii) there is no clear-cut definition of structural similarity. The latter depend on the criteria and methods used. Here, we analyze structural similarity ignoring protein topology. To date, representative sets have been selected either by hand, by sequence comparison techniques which ignore the three-dimensional (3D) structures of the proteins or by using sequence comparisons followed by linear structural comparison (i.e. the topology, or the sequential order of the chains, is enforced in the structural comparison). Here we describe a 3D sequence-independent automated and efficient method to obtain a representative set of protein molecules from the PDB which contains all unique structures and which is structurally non-redundant. The method has two novel features. The first is the use of strictly structural criteria in the selection process without taking into account the sequence information. To this end we employ a fast structural comparison algorithm which requires on average approximately 2 s per pairwise comparison on a workstation. The second novel feature is the iterative application of a heuristic clustering algorithm that greatly reduces the number of comparisons required. We obtain a representative set of 220 chains with resolution better than 3.0 A, or 268 chains including lower resolution entries, NMR entries and models. The

  20. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  1. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a discussion of a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  2. The virtual human face: superimposing the simultaneously captured 3D photorealistic skin surface of the face on the untextured skin image of the CBCT scan.

    PubMed

    Naudi, K B; Benramadan, R; Brocklebank, L; Ju, X; Khambay, B; Ayoub, A

    2013-03-01

    The aim of this study was to evaluate the impact of simultaneous capture of the three-dimensional (3D) surface of the face and cone beam computed tomography (CBCT) scan of the skull on the accuracy of their registration and superimposition. 3D facial images were acquired in 14 patients using the Di3d (Dimensional Imaging, UK) imaging system and i-CAT CBCT scanner. One stereophotogrammetry image was captured at the same time as the CBCT and another 1 h later. The two stereophotographs were individually superimposed over the CBCT using VRmesh. Seven patches were isolated on the final merged surfaces. For the whole face and each individual patch, the maximum and minimum range of deviation between surfaces, the absolute average distance between surfaces, and the standard deviation for the 90th percentile of the distance errors were calculated. The superimposition errors of the whole face for both captures revealed statistically significant differences (P=0.00081). The absolute average distances in the separate and simultaneous captures were 0.47 and 0.27 mm, respectively. The level of superimposition accuracy in patches from separate captures was 0.3-0.9 mm, while that of simultaneous captures was 0.4 mm. Simultaneous capture of Di3d and CBCT images significantly improved the accuracy of superimposition of these image modalities.
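
    As an illustration of the kind of deviation metrics reported above (absolute average distance, 90th percentile of the distance errors), the following is a minimal Python sketch that computes them between two already-superimposed surfaces via nearest-neighbour search; the KD-tree approach and all names are assumptions, not the VRmesh workflow used in the study.

```python
# Sketch: surface-to-surface deviation metrics between two registered point sets.
# Assumes both surfaces are already superimposed; the KD-tree nearest-neighbour
# approach is an assumption, not the exact method of the paper.
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation_stats(surface_a, surface_b):
    """surface_a, surface_b: (N, 3) arrays of vertex coordinates in mm."""
    tree = cKDTree(surface_b)
    dists, _ = tree.query(surface_a)            # closest-point distance for every vertex of A
    return {
        "max": float(dists.max()),
        "min": float(dists.min()),
        "mean_abs": float(np.abs(dists).mean()),   # "absolute average distance"
        "p90": float(np.percentile(dists, 90)),    # 90th percentile of the distance errors
    }
```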

  3. Elaine Halley: 'the future - 3D planning but with the face in motion'.

    PubMed

    2015-03-01

    Ahead of her presentation at the 2015 British Dental Conference and Exhibition, the BDJ caught up with Elaine Halley, to find out more about the changing face of cosmetic dentistry, Digital Smile Design and the vision behind her dental spa.

  4. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for getting 3D-images from the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and for scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D-data of other geophysical methods. 3D-seismic data can be displayed in different ways to give a spatial impression of the subsurface. They are a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. These methods should increase the spatial perception of the structures, and thus of the processes, in the subsurface. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data so that a continuous view of the data when changing the viewing angle and the data section is possible, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  5. A Dynamic 3D Graphical Representation for RNA Structure Analysis and Its Application in Non-Coding RNA Classification

    PubMed Central

    Dong, Xiaoqing; Fang, Yiliang; Wang, Kejing; Zhu, Lijuan; Wang, Ke; Huang, Tao

    2016-01-01

    With the development of new technologies in transcriptomics and epigenetics, RNAs have been identified to play more and more important roles in life processes. Consequently, various methods have been proposed to assess the biological functions of RNAs and thus classify them functionally, among which comparative study of RNA structures is perhaps the most important one. To measure the structural similarity of RNAs and classify them, we propose a novel three dimensional (3D) graphical representation of RNA secondary structure, in which an RNA secondary structure is first transformed into a characteristic sequence based on the chemical properties of nucleic acids; a dynamic 3D graph is then constructed for the characteristic sequence; and lastly a numerical characterization of the 3D graph is used to represent the RNA secondary structure. We tested our algorithm on three datasets: (1) Dataset I consisting of nine RNA secondary structures of viruses, (2) Dataset II consisting of complex RNA secondary structures including pseudo-knots, and (3) Dataset III consisting of 18 non-coding RNA families. We also compare our method with nine other existing methods using Datasets II and III. The results demonstrate that our method is better than the other methods in similarity measurement and classification of RNA secondary structures. PMID:27213271

  6. Elaine Halley: 'the future - 3D planning but with the face in motion'.

    PubMed

    2015-03-01

    Ahead of her presentation at the 2015 British Dental Conference and Exhibition, the BDJ caught up with Elaine Halley, to find out more about the changing face of cosmetic dentistry, Digital Smile Design and the vision behind her dental spa. PMID:25812879

  7. Possible use of small UAV to create high resolution 3D model of vertical rock faces

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Kerkovits, Krisztian

    2014-05-01

    One of the newest and most rapidly emerging acquisition technologies is the use of small unmanned aerial vehicles (UAVs) for photogrammetry and remote sensing. Several successful research projects and industrial applications can be found worldwide (mine investigation, precision agriculture, mapping, etc.), but those surveys focus mainly on horizontal areas. In our research a mixed acquisition method was developed and tested to create a dense 3D model of a columnar outcrop close to Kő-hegy (Pest County). Our primary goal was to create a model in which the pattern of the different layers is clearly visible and measurable, as well as to test the robustness of our approach. Our method uses a consumer-grade camera to take digital photographs of the outcrop. A small, custom-made tricopter was built to carry the camera above the middle and top parts of the rock; the bottom part can be photographed only from several ground positions. During the field survey, ground control points were installed and measured using kinematically corrected GPS. These data were used for the georeferencing of the generated point cloud. Free online services built on Structure from Motion (SfM) algorithms, as well as desktop software, were also tested to generate the relative point cloud and for further processing and analysis.

  8. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even small noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse encoding-based classification algorithms have given promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology by modeling the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with the results clearly superior to common state-of-the-art methodologies in different scenarios.

  9. Exact asymptotic statistics of the n-edged face in a 3D Poisson-Voronoi tessellation

    NASA Astrophysics Data System (ADS)

    Hilhorst, H. J.

    2016-05-01

    This work considers the 3D Poisson-Voronoi tessellation. It investigates the joint probability distribution π_n(L) for an arbitrarily selected cell face to be n-edged and for the distance between the seeds of the two adjacent cells to be equal to 2L. For this quantity an exact expression is derived, valid in the limit n → ∞ with n^{1/6}L fixed. The leading order correction term is determined. Good agreement with earlier Monte Carlo data is obtained. The cell face is shown to be surrounded by a three-dimensional domain that is empty of seeds and is the union of n balls; it is pumpkin-shaped and analogous to the flower of the 2D Voronoi cell. For n → ∞ this domain tends towards a torus of equal major and minor radii. The radii scale as n^{1/3}, in agreement with earlier heuristic work. A detailed understanding is achieved of several other statistical properties of the n-edged cell face.
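
    A hedged Monte Carlo sketch of the quantities studied above (the edge count n of a cell face and the half-distance L between the seeds of the two adjacent cells), using scipy's Voronoi construction on a finite set of uniform points; boundary effects are ignored and this only illustrates the definitions, not the exact asymptotic result of the paper.

```python
# Monte Carlo sketch of (n, L) statistics for faces of a 3D Poisson-Voronoi tessellation:
# n = number of edges of a cell face, 2L = distance between the seeds of the two cells
# sharing that face.  Boundary handling is crude (faces touching the hull are skipped).
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.uniform(0.0, 1.0, size=(5000, 3))    # Poisson process approximated by a fixed N
vor = Voronoi(seeds)

samples = []
for (i, j), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in ridge:                               # skip unbounded faces at the hull
        continue
    n_edges = len(ridge)                          # polygonal face: #vertices == #edges
    L = 0.5 * np.linalg.norm(seeds[i] - seeds[j])
    samples.append((n_edges, L))

n_vals = np.array([n for n, _ in samples])
print("faces sampled:", len(samples), "mean edges per face:", n_vals.mean())
```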

  10. Magnetic fields end-face effect investigation of HTS bulk over PMG with 3D-modeling numerical method

    NASA Astrophysics Data System (ADS)

    Qin, Yujie; Lu, Yiyun

    2015-09-01

    In this paper, the magnetic-field end-face effect of a high temperature superconducting (HTS) bulk over a permanent magnetic guideway (PMG) is investigated with a 3D-modeling numerical method. The electromagnetic behavior of the bulk is simulated using the finite element method (FEM). The framework is formulated with the magnetic field vector method (H-method). A superconducting levitation system composed of one rectangular HTS bulk and one infinitely long PMG is successfully investigated using the proposed method. The simulation results show that, for an HTS bulk of finite geometry, even though the applied magnetic field is distributed only in the x-y plane, a magnetic field component Hz along the z-axis can be observed inside the HTS bulk.

  11. Evaluating the Effectiveness of Organic Chemistry Textbooks in Promoting Representational Fluency and Understanding of 2D-3D Diagrammatic Relationships

    ERIC Educational Resources Information Center

    Kumi, Bryna C.; Olimpo, Jeffrey T.; Bartlett, Felicia; Dixon, Bonnie L.

    2013-01-01

    The use of two-dimensional (2D) representations to communicate and reason about micromolecular phenomena is common practice in chemistry. While experts are adept at using such representations, research suggests that novices often exhibit great difficulty in understanding, manipulating, and translating between various representational forms. When…

  12. Insertion of 3-D-primitives in mesh-based representations: towards compact models preserving the details.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu

    2010-07-01

    We propose an original hybrid modeling process of urban scenes that represents 3-D models as a combination of mesh-based surfaces and geometric 3-D-primitives. Meshes describe details such as ornaments and statues, whereas 3-D-primitives code for regular shapes such as walls and columns. Starting from a 3-D surface obtained by multiview stereo techniques, these primitives are inserted into the surface after being detected. This strategy allows the introduction of semantic knowledge, the simplification of the modeling, and even correction of errors generated by the acquisition process. We design a hierarchical approach exploring different scales of an observed scene. Each level consists first in segmenting the surface using a multilabel energy model optimized by α-expansion and then in fitting 3-D-primitives such as planes, cylinders or tori on the obtained partition where relevant. Experiments on real meshes, depth maps and synthetic surfaces show good potential for the proposed approach.
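
    As a hedged illustration of the primitive-fitting step described above, the sketch below fits the simplest primitive, a plane, to the vertices of one segmented region by least squares (SVD); cylinder and torus fitting, and the multi-label segmentation itself, are not shown, and the function names are placeholders.

```python
# Sketch of the plane-fitting case of the primitive-fitting step: fit a least-squares
# plane to the mesh vertices of one segmented region and measure point-to-plane residuals.
import numpy as np

def fit_plane(points):
    """points: (N, 3) vertices of one segment. Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                 # direction of least variance
    return centroid, normal

def plane_residuals(points, centroid, normal):
    return np.abs((points - centroid) @ normal)     # point-to-plane distances
```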

  13. Three-dimensional information in face representations revealed by identity aftereffects.

    PubMed

    Jiang, Fang; Blanz, Volker; O'Toole, Alice J

    2009-03-01

    Representations of individual faces evolve with experience to support progressively more robust recognition. Knowledge of three-dimensional face structure is required to predict an image of a face as illumination and viewpoint change. Robust recognition across such transformations can be achieved with representations based on multiple two-dimensional views, three-dimensional structure, or both. We used face-identity adaptation in a familiarization paradigm to address a long-standing controversy concerning the role of two-dimensional versus three-dimensional information in face representations. We reasoned that if three-dimensional information is coded in the representations of familiar faces, then learning a new face using images generated by one three-dimensional transformation should enhance the robustness of the representation to another type of three-dimensional transformation. Familiarization with multiple views of faces enhanced the transfer of face-identity adaptation effects across changes in illumination by compensating for a generalization cost at a novel test viewpoint. This finding demonstrates a role for three-dimensional information in representations of familiar faces.

  14. The Role of Familiarity for Representations in Norm-Based Face Space

    PubMed Central

    Faerber, Stella J.; Kaufmann, Jürgen M.; Leder, Helmut; Martin, Eva Maria; Schweinberger, Stefan R.

    2016-01-01

    According to the norm-based version of the multidimensional face space model (nMDFS, Valentine, 1991), any given face and its corresponding anti-face (which deviates from the norm in exactly opposite direction as the original face) should be equidistant to a hypothetical prototype face (norm), such that by definition face and anti-face should bear the same level of perceived typicality. However, it has been argued that familiarity affects perceived typicality and that representations of familiar faces are qualitatively different (e.g., more robust and image-independent) from those for unfamiliar faces. Here we investigated the role of face familiarity for rated typicality, using two frequently used operationalisations of typicality (deviation-based: DEV), and distinctiveness (face in the crowd: FITC) for faces of celebrities and their corresponding anti-faces. We further assessed attractiveness, likeability and trustworthiness ratings of the stimuli, which are potentially related to typicality. For unfamiliar faces and their corresponding anti-faces, in line with the predictions of the nMDFS, our results demonstrate comparable levels of perceived typicality (DEV). In contrast, familiar faces were perceived much less typical than their anti-faces. Furthermore, familiar faces were rated higher than their anti-faces in distinctiveness, attractiveness, likability and trustworthiness. These findings suggest that familiarity strongly affects the distribution of facial representations in norm-based face space. Overall, our study suggests (1) that familiarity needs to be considered in studies of mental representations of faces, and (2) that familiarity, general distance-to-norm and more specific vector directions in face space make different and interactive contributions to different types of facial evaluations. PMID:27168323

  15. The Role of Familiarity for Representations in Norm-Based Face Space.

    PubMed

    Faerber, Stella J; Kaufmann, Jürgen M; Leder, Helmut; Martin, Eva Maria; Schweinberger, Stefan R

    2016-01-01

    According to the norm-based version of the multidimensional face space model (nMDFS, Valentine, 1991), any given face and its corresponding anti-face (which deviates from the norm in exactly opposite direction as the original face) should be equidistant to a hypothetical prototype face (norm), such that by definition face and anti-face should bear the same level of perceived typicality. However, it has been argued that familiarity affects perceived typicality and that representations of familiar faces are qualitatively different (e.g., more robust and image-independent) from those for unfamiliar faces. Here we investigated the role of face familiarity for rated typicality, using two frequently used operationalisations of typicality (deviation-based: DEV), and distinctiveness (face in the crowd: FITC) for faces of celebrities and their corresponding anti-faces. We further assessed attractiveness, likeability and trustworthiness ratings of the stimuli, which are potentially related to typicality. For unfamiliar faces and their corresponding anti-faces, in line with the predictions of the nMDFS, our results demonstrate comparable levels of perceived typicality (DEV). In contrast, familiar faces were perceived much less typical than their anti-faces. Furthermore, familiar faces were rated higher than their anti-faces in distinctiveness, attractiveness, likability and trustworthiness. These findings suggest that familiarity strongly affects the distribution of facial representations in norm-based face space. Overall, our study suggests (1) that familiarity needs to be considered in studies of mental representations of faces, and (2) that familiarity, general distance-to-norm and more specific vector directions in face space make different and interactive contributions to different types of facial evaluations. PMID:27168323

  16. Children's Face Identity Representations Are No More View Specific than Those of Adults

    ERIC Educational Resources Information Center

    Jeffery, Linda; Rathbone, Cameron; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Face recognition performance improves during childhood, not reaching adult levels until late adolescence, yet the source of this improvement is unclear. Recognition of faces across changes in viewpoint appears particularly slow to develop. Poor cross-view recognition suggests that children's face representations may be more view specific than…

  17. A computational model that recovers the 3D shape of an object from a single 2D retinal representation.

    PubMed

    Li, Yunfeng; Pizlo, Zygmunt; Steinman, Robert M

    2009-05-01

    Human beings perceive 3D shapes veridically, but the underlying mechanisms remain unknown. The problem of producing veridical shape percepts is computationally difficult because the 3D shapes have to be recovered from 2D retinal images. This paper describes a new model, based on a regularization approach, that does this very well. It uses a new simplicity principle composed of four shape constraints: viz., symmetry, planarity, maximum compactness and minimum surface. Maximum compactness and minimum surface have never been used before. The model was tested with random symmetrical polyhedra. It recovered their 3D shapes from a single randomly-chosen 2D image. Neither learning nor depth perception was required. The effectiveness of the maximum compactness and the minimum surface constraints was measured by how well the aspect ratio of the 3D shapes was recovered. These constraints were effective; they recovered the aspect ratio of the 3D shapes very well. Aspect ratios recovered by the model were compared to aspect ratios adjusted by four human observers. They also adjusted aspect ratios very well. In those rare cases in which the human observers showed large errors in adjusted aspect ratios, their errors were very similar to the errors made by the model. PMID:18621410

  18. 3D front face solid-phase fluorescence spectroscopy combined with Independent Components Analysis to characterize organic matter in model soils.

    PubMed

    Ammari, Faten; Bendoula, Ryad; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N; Roger, Jean-Michel

    2014-07-01

    Soil organic matter (SOM) is a very complex and heterogeneous system which complicates its characterization. In fact, the methods classically used to characterize SOM are time- and solvent-consuming and insufficiently informative. The aim of this work is to study the potential of 3D solid-phase front face fluorescence (3D-SPFFF) spectroscopy to quickly provide a relevant and objective characterization of SOM as an alternative to the existing methods. Different soil models were prepared to simulate natural soil composition and were analyzed by 3D front-face fluorescence spectroscopy without prior preparation. The spectra were then treated using Independent Components Analysis. In this way, different organic molecules such as cellulose, proteins and amino acids used in the soil models were identified. The results of this study clearly indicate that 3D-SPFFF spectroscopy could be an easy, reliable and practical analytical method that could be an alternative to the classical methods in order to study SOM. The use of solid samples revealed some interactions that may occur in natural soils (self-quenching in the case of cellulose) and gave more accurate fluorescence signals for different components of the analyzed soil models. Independent Components Analysis (ICA) has demonstrated its power to extract the most informative signals and thus facilitate the interpretation of the complex 3D fluorescence data. PMID:24840426
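
    A minimal sketch of how Independent Components Analysis can be applied to a stack of 3D front-face fluorescence spectra, assuming each sample's excitation-emission matrix is unfolded into one row; the unfolding and the use of scikit-learn's FastICA are assumptions, not necessarily the preprocessing used in the study.

```python
# Sketch: ICA on unfolded excitation-emission matrices (EEMs).
# eems has shape (n_samples, n_excitation, n_emission); each EEM becomes one row.
import numpy as np
from sklearn.decomposition import FastICA

def ica_on_eems(eems, n_components=5):
    n_samples, n_ex, n_em = eems.shape
    X = eems.reshape(n_samples, n_ex * n_em)              # unfold each EEM into a row
    ica = FastICA(n_components=n_components, random_state=0)
    scores = ica.fit_transform(X)                          # per-sample contribution of each component
    profiles = ica.mixing_.T.reshape(n_components, n_ex, n_em)  # spectral loading map per component
    return scores, profiles
```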

  19. External facial features modify the representation of internal facial features in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2010-08-15

    Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included.

  20. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  1. Locality Constrained Joint Dynamic Sparse Representation for Local Matching Based Face Recognition

    PubMed Central

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  2. Learning Deep Representation for Face Alignment with Auxiliary Attributes.

    PubMed

    Zhang, Zhanpeng; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2016-05-01

    In this study, we show that landmark detection or face alignment task is not a single and independent problem. Instead, its robustness can be greatly improved with auxiliary information. Specifically, we jointly optimize landmark detection together with the recognition of heterogeneous but subtly correlated facial attributes, such as gender, expression, and appearance attributes. This is non-trivial since different attribute inference tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, which not only learns the inter-task correlation but also employs dynamic task coefficients to facilitate the optimization convergence when learning multiple complex tasks. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing face alignment methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art methods based on cascaded deep model. PMID:27046839

  3. Learning Deep Representation for Face Alignment with Auxiliary Attributes.

    PubMed

    Zhang, Zhanpeng; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2016-05-01

    In this study, we show that landmark detection or face alignment task is not a single and independent problem. Instead, its robustness can be greatly improved with auxiliary information. Specifically, we jointly optimize landmark detection together with the recognition of heterogeneous but subtly correlated facial attributes, such as gender, expression, and appearance attributes. This is non-trivial since different attribute inference tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, which not only learns the inter-task correlation but also employs dynamic task coefficients to facilitate the optimization convergence when learning multiple complex tasks. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing face alignment methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art methods based on cascaded deep model.

  4. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  5. Neural Representations of Personally Familiar and Unfamiliar Faces in the Anterior Inferior Temporal Cortex of Monkeys

    PubMed Central

    Eifuku, Satoshi; De Souza, Wania C.; Nakata, Ryuzaburo; Ono, Taketoshi; Tamura, Ryoi

    2011-01-01

    To investigate the neural representations of faces in primates, particularly in relation to their personal familiarity or unfamiliarity, neuronal activities were chronically recorded from the ventral portion of the anterior inferior temporal cortex (AITv) of macaque monkeys during the performance of a facial identification task using either personally familiar or unfamiliar faces as stimuli. By calculating the correlation coefficients between neuronal responses to the faces for all possible pairs of faces given in the task and then using the coefficients as neuronal population-based similarity measures between the faces in pairs, we analyzed the similarity/dissimilarity relationship between the faces, which were potentially represented by the activities of a population of the face-responsive neurons recorded in the area AITv. The results showed that, for personally familiar faces, different identities were represented by different patterns of activities of the population of AITv neurons irrespective of the view (e.g., front, 90° left, etc.), while different views were not represented independently of their facial identities, which was consistent with our previous report. In the case of personally unfamiliar faces, the faces possessing different identities but presented in the same frontal view were represented as similar, which contrasts with the results for personally familiar faces. These results, taken together, outline the neuronal representations of personally familiar and unfamiliar faces in the AITv neuronal population. PMID:21526206

  6. Representation of protein 3D structures in spherical (ρ, ϕ, θ) coordinates and two of its potential applications.

    PubMed

    Reyes, Vicente M

    2011-09-01

    Three-dimensional objects can be represented using cartesian, spherical or cylindrical coordinate systems, among many others. Currently all protein 3D structures in the PDB are in cartesian coordinates. We wanted to explore the possibility that protein 3D structures, especially the globular type (spheroproteins), when represented in spherical coordinates might find useful novel applications. A Fortran program was written to transform protein 3D structure files in cartesian coordinates (x,y,z) to spherical coordinates (ρ, ϕ, θ), with the centroid of the protein molecule as origin. We present here two applications, namely, (1) separation of the protein outer layer (OL) from the inner core (IC); and (2) identifying protrusions and invaginations on the protein surface. In the first application, ϕ and θ were partitioned into suitable intervals and the point with maximum ρ in each such 'ϕ-θ bin' was determined. A suitable cutoff value for ρ is adopted, and for each ϕ-θ bin, all points with ρ values less than the cutoff are considered part of the IC, and those with ρ values equal to or greater than the cutoff are considered part of the OL. We show that this separation procedure is successful as it gives rise to an OL that is significantly more enriched in hydrophilic amino acid residues, and an IC that is significantly more enriched in hydrophobic amino acid residues, as expected. In the second application, the point with maximum ρ in each ϕ-θ bin are sequestered and their frequency distribution constructed (i.e., maximum ρ's sorted from lowest to highest, collected into 1.50Å-intervals, and the frequency in each interval plotted). We show in such plots that invaginations on the protein surface give rise to subpeaks or shoulders on the lagging side of the main peak, while protrusions give rise to similar subpeaks or shoulders, but on the leading side of the main peak. We used the dataset of Laskowski et al. (1996) to demonstrate both applications.
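
    A minimal Python sketch of the transformation and binning procedure described above (the original work used a Fortran program); bin widths, the ρ cutoff and all function names are placeholders.

```python
# Sketch: (x, y, z) -> (rho, phi, theta) about the centroid, then the maximum rho per
# phi-theta bin, which is the basis for the outer-layer / inner-core split.
import numpy as np

def to_spherical(xyz):
    """xyz: (N, 3) atom coordinates. Returns rho, phi (azimuth), theta (polar)."""
    p = xyz - xyz.mean(axis=0)
    rho = np.linalg.norm(p, axis=1)
    phi = np.arctan2(p[:, 1], p[:, 0])
    theta = np.arccos(np.clip(p[:, 2] / np.maximum(rho, 1e-12), -1.0, 1.0))
    return rho, phi, theta

def max_rho_per_bin(rho, phi, theta, n_phi=36, n_theta=18):
    """Maximum rho in each phi-theta bin."""
    phi_idx = np.clip(np.digitize(phi, np.linspace(-np.pi, np.pi, n_phi + 1)) - 1, 0, n_phi - 1)
    th_idx = np.clip(np.digitize(theta, np.linspace(0.0, np.pi, n_theta + 1)) - 1, 0, n_theta - 1)
    max_rho = np.full((n_phi, n_theta), np.nan)
    for r, i, j in zip(rho, phi_idx, th_idx):
        if np.isnan(max_rho[i, j]) or r > max_rho[i, j]:
            max_rho[i, j] = r
    return max_rho
```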

  7. Representation of protein 3D structures in spherical (ρ, ϕ, θ) coordinates and two of its potential applications.

    PubMed

    Reyes, Vicente M

    2011-09-01

    Three-dimensional objects can be represented using cartesian, spherical or cylindrical coordinate systems, among many others. Currently all protein 3D structures in the PDB are in cartesian coordinates. We wanted to explore the possibility that protein 3D structures, especially the globular type (spheroproteins), when represented in spherical coordinates might find useful novel applications. A Fortran program was written to transform protein 3D structure files in cartesian coordinates (x,y,z) to spherical coordinates (ρ, ϕ, θ), with the centroid of the protein molecule as origin. We present here two applications, namely, (1) separation of the protein outer layer (OL) from the inner core (IC); and (2) identifying protrusions and invaginations on the protein surface. In the first application, ϕ and θ were partitioned into suitable intervals and the point with maximum ρ in each such 'ϕ-θ bin' was determined. A suitable cutoff value for ρ is adopted, and for each ϕ-θ bin, all points with ρ values less than the cutoff are considered part of the IC, and those with ρ values equal to or greater than the cutoff are considered part of the OL. We show that this separation procedure is successful as it gives rise to an OL that is significantly more enriched in hydrophilic amino acid residues, and an IC that is significantly more enriched in hydrophobic amino acid residues, as expected. In the second application, the point with maximum ρ in each ϕ-θ bin are sequestered and their frequency distribution constructed (i.e., maximum ρ's sorted from lowest to highest, collected into 1.50Å-intervals, and the frequency in each interval plotted). We show in such plots that invaginations on the protein surface give rise to subpeaks or shoulders on the lagging side of the main peak, while protrusions give rise to similar subpeaks or shoulders, but on the leading side of the main peak. We used the dataset of Laskowski et al. (1996) to demonstrate both applications. PMID

  8. Identity-level representations affect unfamiliar face matching performance in sequential but not simultaneous tasks.

    PubMed

    Menon, Nadia; White, David; Kemp, Richard I

    2015-01-01

    According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity-such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on "match" trials and caused participants to adopt a more liberal response bias than the 2ID condition. However, this manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images, when making identity judgements in sequential face matching. PMID:25686094

  9. A novel 3D algorithm for VLSI floorplanning

    NASA Astrophysics Data System (ADS)

    Rani, D. Gracia N.; Rajaram, S.; Sudarasan, Athira

    2013-01-01

    3-D VLSI circuits are becoming a hot topic because of their potential for enhancing performance, while they also face challenges such as the increased complexity of floorplanning and placement in VLSI physical design. Efficient 3-D floorplan representations are needed to handle placement optimization in new circuit designs. We analyze and categorize some state-of-the-art 3-D representations and propose a ternary tree model for 3-D nonslicing floorplans, extending the B*-tree from 2D. This paper proposes a novel optimization algorithm for the packing of 3D rectangular blocks. The technique considered is a differential evolution (DE) algorithm, which is very fast in evaluating the feasibility of a ternary tree representation. Experimental results based on the MCNC benchmarks with constraints show that the proposed differential evolution algorithm can quickly produce optimal solutions.
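
    As a hedged illustration of differential evolution applied to a packing objective, the toy sketch below optimises continuous 3D block positions to minimise the enclosing bounding-box volume with an overlap penalty; this is not the ternary-tree encoding proposed in the paper, and all values are placeholders.

```python
# Toy sketch: differential evolution on a continuous relaxation of 3D block packing.
import numpy as np
from scipy.optimize import differential_evolution

sizes = np.array([[2, 3, 1], [1, 1, 2], [2, 2, 2], [3, 1, 1]], dtype=float)  # block w, d, h
K = len(sizes)

def objective(flat_pos):
    pos = flat_pos.reshape(K, 3)                        # lower corners of the blocks
    hi = pos + sizes
    volume = np.prod(hi.max(axis=0) - pos.min(axis=0))  # enclosing bounding-box volume
    overlap = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            o = np.minimum(hi[i], hi[j]) - np.maximum(pos[i], pos[j])
            if np.all(o > 0):
                overlap += np.prod(o)                   # pairwise overlap volume
    return volume + 100.0 * overlap                     # heavy penalty on overlaps

bounds = [(0.0, 10.0)] * (3 * K)
result = differential_evolution(objective, bounds, seed=0, maxiter=200)
print(result.fun, result.x.reshape(K, 3))
```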

  10. Identity-Specific Face Adaptation Effects: Evidence for Abstractive Face Representations

    ERIC Educational Resources Information Center

    Hole, Graham

    2011-01-01

    The effects of selective adaptation on familiar face perception were examined. After prolonged exposure to photographs of a celebrity, participants saw a series of ambiguous morphs that were varying mixtures between the face of that person and a different celebrity. Participants judged fewer of the morphs to resemble the celebrity to which they…

  11. Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers

    NASA Astrophysics Data System (ADS)

    Sandbach, S. D.; Lane, S. N.; Hardy, R. J.; Amsler, M. L.; Ashworth, P. J.; Best, J. L.; Nicholas, A. P.; Orfeo, O.; Parsons, D. R.; Reesink, A. J. H.; Szupiany, R. N.

    2012-12-01

    Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest simpler two-dimensional models may have great utility in the investigation of flow within large rivers.
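
    Roughness-length treatments of this kind typically rest on the logarithmic law of the wall, in which a larger roughness length z0 (standing in for unmeasured bedform roughness) lowers the predicted near-bed velocity; a minimal sketch with purely illustrative values follows.

```python
# Sketch of the log-law relation u(z) = (u*/kappa) * ln(z / z0) underlying roughness-length
# parameterizations; u_star and z0 values below are illustrative, not from the study.
import numpy as np

KAPPA = 0.41                       # von Karman constant

def log_law_velocity(z, u_star, z0):
    """Streamwise velocity (m/s) at height z (m) above the bed for roughness length z0 (m)."""
    return (u_star / KAPPA) * np.log(z / z0)

z = np.linspace(0.1, 5.0, 50)
for z0 in (0.001, 0.01, 0.1):      # smoother bed -> bedform-scale roughness
    u = log_law_velocity(z, u_star=0.05, z0=z0)
```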

  12. Getting to know you: the acquisition of new face representations in autism spectrum conditions.

    PubMed

    Churches, Owen; Damiano, Cara; Baron-Cohen, Simon; Ring, Howard

    2012-08-01

    Social difficulties form a part of the canonical description of autism spectrum conditions (ASC), and the development of familiarity with new faces is a key ability required to navigate the social world. Here, we investigated the acquisition of new face representations in ASC by analysing the N170 and N250 event-related potential components induced by a previously unfamiliar face that was embedded in a series of other unfamiliar faces. We found that participants with ASC developed a smaller N250 component to the target face, indicating that the development of new face representations is impaired. We also found that the participants with ASC showed a smaller N170 component to both the target and the nontarget faces. This highlights the role of the early stages of face detection, structural encoding and attention in the formation of face memories in the typical population and implicates the dysfunction of these stages in the manifestation of the social difficulties observed in ASC. PMID:22643237

  13. The Anterior Temporal Face Area Contains Invariant Representations of Face Identity That Can Persist Despite the Loss of Right FFA and OFA.

    PubMed

    Yang, Hua; Susilo, Tirta; Duchaine, Bradley

    2016-03-01

    Macaque neurophysiology found image-invariant representations of face identity in a face-selective patch in anterior temporal cortex. A face-selective area in human anterior temporal lobe (fATL) has been reported, but has not been reliably identified, and its function and relationship with posterior face areas is poorly understood. Here, we used fMRI adaptation and neuropsychology to ask whether fATL contains image-invariant representations of face identity, and if so, whether these representations require normal functioning of fusiform face area (FFA) and occipital face area (OFA). We first used a dynamic localizer to demonstrate that 14 of 16 normal subjects exhibit a highly selective right fATL. Next, we found evidence that this area subserves image-invariant representation of identity: Right fATL showed repetition suppression to the same identity across different images, while other areas did not. Finally, to examine fATL's relationship with posterior areas, we used the same procedures with Galen, an acquired prosopagnosic who lost right FFA and OFA. Despite the absence of posterior face areas, Galen's right fATL preserved its face selectivity and showed repetition suppression comparable to that in controls. Our findings suggest that right fATL contains image-invariant face representations that can persist despite the absence of right FFA and OFA, but these representations are not sufficient for normal face recognition.

  14. Identity-specific face adaptation effects: evidence for abstractive face representations.

    PubMed

    Hole, Graham

    2011-05-01

    The effects of selective adaptation on familiar face perception were examined. After prolonged exposure to photographs of a celebrity, participants saw a series of ambiguous morphs that were varying mixtures between the face of that person and a different celebrity. Participants judged fewer of the morphs to resemble the celebrity to which they had been adapted, implying that they were now less sensitive to that particular face. Similar results were obtained when the adapting faces were highly dissimilar in viewpoint to the test morphs; when they were presented upside-down; or when they were vertically stretched to three times their normal height. These effects rule out explanations of adaptation effects solely in terms of low-level image-based adaptation. Instead they are consistent with the idea that relatively viewpoint-independent, person-specific adaptation occurred, at the level of either the "Face Recognition Units" or "Person Identity Nodes" in Burton, Bruce and Johnston's (1990) model of face recognition. PMID:21316651

  15. Segmentally arranged somatotopy within the face representation of human primary somatosensory cortex.

    PubMed

    Moulton, Eric A; Pendse, Gautam; Morris, Susie; Aiello-Lammens, Matthew; Becerra, Lino; Borsook, David

    2009-03-01

    Though the somatotopic representation of the face in human primary somatosensory cortex (S1) for innocuous stimuli is controversial, previous work suggests that painful heat is represented based on an "onion-skin" or segmental pattern on the face. The aim of this study was to determine if face somatotopy for brush stimuli in S1 also follows this segmental representation model. Twelve healthy subjects (nine men, three women) underwent functional magnetic resonance imaging to measure blood oxygen level dependent signals during brush (1 Hz, 15 s) applied to their faces. Separate functional scans were collected for brush stimuli repetitively applied to each of five separate stimulation sites on the right side of the face. These sites were arranged in a vertical, horizontal, and circular manner encompassing the three divisions of the trigeminal nerve. To minimize inter-individual morphological differences in the post-central gyrus across subjects, cortical surface-based registration was implemented before group statistical image analysis. Based on activation foci, somatotopic activation in the post-central gyrus was detected for brush, consistent with the segmental face representation model. PMID:18266215

  16. Evidence for Integrated Visual Face and Body Representations in the Anterior Temporal Lobes.

    PubMed

    Harry, Bronson B; Umla-Runge, Katja; Lawrence, Andrew D; Graham, Kim S; Downing, Paul E

    2016-08-01

    Research on visual face perception has revealed a region in the ventral anterior temporal lobes, often referred to as the anterior temporal face patch (ATFP), which responds strongly to images of faces. To date, the selectivity of the ATFP has been examined by contrasting responses to faces against a small selection of categories. Here, we assess the selectivity of the ATFP in humans with a broad range of visual control stimuli to provide a stronger test of face selectivity in this region. In Experiment 1, participants viewed images from 20 stimulus categories in an event-related fMRI design. Faces evoked more activity than all other 19 categories in the left ATFP. In the right ATFP, equally strong responses were observed for both faces and headless bodies. To pursue this unexpected finding, in Experiment 2, we used multivoxel pattern analysis to examine whether the strong response to face and body stimuli reflects a common coding of both classes or instead overlapping but distinct representations. On a voxel-by-voxel basis, face and whole-body responses were significantly positively correlated in the right ATFP, but face and body-part responses were not. This finding suggests that there is shared neural coding of faces and whole bodies in the right ATFP that does not extend to individual body parts. In contrast, the same approach revealed distinct face and body representations in the right fusiform gyrus. These results are indicative of an increasing convergence of distinct sources of person-related perceptual information proceeding from the posterior to the anterior temporal cortex.

  17. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant'Omobono and Sarno project's strategies

    NASA Astrophysics Data System (ADS)

    Gianolio, S.; Mermati, F.; Genovese, G.

    2014-06-01

    This paper presents a "standard" method that is being developed by ARESlab of Rome's La Sapienza University for the documentation and representation of archaeological artifacts and structures through automatic photogrammetry software. The image-based 3D modeling technique was applied in two projects: in Sarno and in Rome. The first is a small city in the Campania region along Via Popilia, known as the ancient way from Capua to Rhegion. The interest in this city is based on the recovery of over 2100 tombs from the local necropolis, which contained more than 100,000 artifacts collected in the "Museo Nazionale Archeologico della Valle del Sarno". In Rome the project concerns the archaeological area of the Insula Volusiana, located in the Forum Boarium close to the Sant'Omobono sacred area. During the studies photographs were taken with Canon EOS 5D Mark II and Canon EOS 600D cameras. 3D models and meshes were created in the Photoscan software. The TOF-CW Z+F IMAGER® 5006h laser scanner was used for dense data collection in the archaeological area of Rome and to make a metric comparison between range-based and image-based techniques. In these projects, image-based modeling (IBM), as a low-cost technique, proved to yield a high-accuracy improvement when planned correctly, and it also showed how it helps to obtain a relief of complex strata and architectures compared to traditional manual documentation methods (e.g. two-dimensional drawings). The multidimensional recording can be used for future studies of the archaeological heritage, which is especially valuable given the "destructive" character of an excavation. The presented methodology is suitable for 3D registration, and its accuracy also improves the scientific value of the documentation.

  18. Exploring Children's Face-Space: A Multidimensional Scaling Analysis of the Mental Representation of Facial Identity

    ERIC Educational Resources Information Center

    Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing

    2009-01-01

    We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces…
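
    A minimal sketch of the multidimensional scaling step on a precomputed face dissimilarity matrix (placeholder random data), using scikit-learn; this only illustrates the type of analysis, not the authors' exact procedure.

```python
# Sketch: embed a precomputed face dissimilarity matrix (e.g., derived from odd-man-out
# similarity judgements) into a low-dimensional "face space" with metric MDS.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
d = rng.random((10, 10))
d = (d + d.T) / 2.0                 # symmetric dissimilarities
np.fill_diagonal(d, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)       # one 2-D coordinate per face
```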

  19. Facing a Crisis in Representation: An Urban Superintendent and the Press.

    ERIC Educational Resources Information Center

    Brunner, C. Cryss

    The purpose of this paper is to examine--using single-case qualitative ethnographic methods--how an urban superintendent responded to a "crisis in representation" with the intent that such an examination will be instructive for other superintendents faced with similar challenges. The paper opens with a description of the methodology, followed by a brief…

  20. Forgetting the Once-Seen Face: Estimating the Strength of an Eyewitness's Memory Representation

    ERIC Educational Resources Information Center

    Deffenbacher, Kenneth A.; Bornstein, Brian H.; McGorty, E. Kiernan; Penrod, Steven D.

    2008-01-01

    The fidelity of an eyewitness's memory representation is an issue of paramount forensic concern. Psychological science has been unable to offer more than vague generalities concerning the relation of retention interval to memory trace strength for the once-seen face. A meta-analysis of 53 facial memory studies produced a highly reliable…

  1. The representation of information about faces in the temporal and frontal lobes.

    PubMed

    Rolls, Edmund T

    2007-01-01

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximising the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalisation and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses, generalise to other views of the same face. A theory is described of how such invariant representations may be produced in a hierarchically organised set of visual cortical areas with convergent connectivity. The theory proposes that neurons in these visual areas use a modified Hebb synaptic modification rule with a short-term memory trace to capture whatever can be captured at each stage that is invariant about objects as the objects change in retinal view, position, size and rotation. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye gaze, face view and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. Outputs of these systems reach the amygdala, in which face-selective neurons are found

  2. Independent components analysis coupled with 3D-front-face fluorescence spectroscopy to study the interaction between plastic food packaging and olive oil.

    PubMed

    Kassouf, Amine; El Rakwe, Maria; Chebib, Hanna; Ducruet, Violette; Rutledge, Douglas N; Maalouly, Jacqueline

    2014-08-11

    Olive oil is one of the most valued sources of fats in the Mediterranean diet. Its storage was generally done using glass or metallic packaging materials. Nowadays, plastic packaging has gained worldwide spread for the storage of olive oil. However, plastics are not inert and interaction phenomena may occur between packaging materials and olive oil. In this study, extra virgin olive oil samples were submitted to accelerated interaction conditions, in contact with polypropylene (PP) and polylactide (PLA) plastic packaging materials. 3D-front-face fluorescence spectroscopy, being a simple, fast and non destructive analytical technique, was used to study this interaction. Independent components analysis (ICA) was used to analyze raw 3D-front-face fluorescence spectra of olive oil. ICA was able to highlight a probable effect of a migration of substances with antioxidant activity. The signals extracted by ICA corresponded to natural olive oil fluorophores (tocopherols and polyphenols) as well as newly formed ones which were tentatively identified as fluorescent oxidation products. Based on the extracted fluorescent signals, olive oil in contact with plastics had slower aging rates in comparison with reference oils. Peroxide and free acidity values validated the results obtained by ICA, related to olive oil oxidation rates. Sorbed olive oil in plastic was also quantified given that this sorption could induce a swelling of the polymer thus promoting migration.

  3. Face Recognition Using Sparse Representation-Based Classification on K-Nearest Subspace

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing

    2013-01-01

    Sparse representation-based classification (SRC) has been proven to be a robust face recognition method. However, its computational complexity is very high because it requires solving a complex ℓ1-minimization problem. To improve the calculation efficiency, we propose a novel face recognition method, called sparse representation-based classification on k-nearest subspace (SRC-KNS). Our method first exploits the distance between the test image and the subspace of each individual class to determine the nearest subspaces and then performs SRC on the selected classes. SRC-KNS thus greatly reduces the scale of the sparse representation problem, and the computation needed to determine the nearest subspaces is quite simple. Therefore, SRC-KNS has a much lower computational complexity than the original SRC. In order to recognize occluded face images well, we propose the modular SRC-KNS. For this modular method, face images are first partitioned into a number of blocks, and we then propose an indicator to remove the contaminated blocks and choose the nearest subspaces. Finally, SRC is used to classify the occluded test sample in the new feature space. Compared to the approach used in the original SRC work, our modular SRC-KNS can greatly reduce the computational load. A number of face recognition experiments show that our methods achieve at least a fivefold speed-up over the original SRC, while achieving comparable or even better recognition rates. PMID:23555671
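
    The two stages described above can be sketched in a few lines of Python (a minimal illustration under assumed data structures, not the authors' implementation; the ℓ1 step is approximated here with a Lasso solver):

    import numpy as np
    from sklearn.linear_model import Lasso

    def src_kns(test, class_dicts, k=5, alpha=0.01):
        """SRC on the k nearest class subspaces.
        class_dicts maps label -> (n_features, n_i) matrix of training columns; test is (n_features,)."""
        # Stage 1: distance from the test image to each class subspace (least-squares projection residual).
        dists = {}
        for label, D in class_dicts.items():
            coef, *_ = np.linalg.lstsq(D, test, rcond=None)
            dists[label] = np.linalg.norm(test - D @ coef)
        nearest = sorted(dists, key=dists.get)[:k]

        # Stage 2: sparse representation over the concatenated k nearest class dictionaries.
        D_k = np.hstack([class_dicts[l] for l in nearest])
        x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D_k, test).coef_

        # Assign the label whose atoms give the smallest reconstruction residual.
        residuals, start = {}, 0
        for l in nearest:
            n_i = class_dicts[l].shape[1]
            x_l = np.zeros_like(x)
            x_l[start:start + n_i] = x[start:start + n_i]
            residuals[l] = np.linalg.norm(test - D_k @ x_l)
            start += n_i
        return min(residuals, key=residuals.get)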

  4. Robust Face Recognition via Minimum Error Entropy-Based Atomic Representation.

    PubMed

    Wang, Yulong; Tang, Yuan Yan; Li, Luoqing

    2015-12-01

    Representation-based classifiers (RCs) have attracted considerable attention in face recognition in recent years. However, most existing RCs use the mean square error (MSE) criterion as the cost function, which relies on the Gaussianity assumption of the error distribution and is sensitive to non-Gaussian noise. This may severely degrade the performance of MSE-based RCs in recognizing facial images with random occlusion and corruption. In this paper, we present a minimum error entropy-based atomic representation (MEEAR) framework for face recognition. Unlike existing MSE-based RCs, our framework is based on the minimum error entropy criterion, which does not depend on the error distribution and is shown to be more robust to noise. In particular, MEEAR can produce a discriminative representation vector by minimizing the atomic-norm-regularized Renyi's entropy of the reconstruction error. The optimality conditions are provided for the general atomic representation model. As a general framework, MEEAR can also be used as a platform to develop new classifiers. Two effective MEE-based RCs are proposed by defining appropriate atomic sets. The experimental results on popular face databases show that MEEAR can improve both the recognition accuracy and the reconstructed results compared with the state-of-the-art MSE-based RCs. PMID:26513784
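
    The objective sketched in the abstract can be written, in one common form, as minimising an atomic-norm-regularised Renyi quadratic entropy of the reconstruction error (a hedged reconstruction of the general idea, not the paper's exact notation):

    \[
    \hat{x} \;=\; \arg\min_{x} \; H_{2}(e) \;+\; \lambda \, \|x\|_{\mathcal{A}},
    \qquad e = y - A x,
    \qquad
    H_{2}(e) \;=\; -\log \frac{1}{N^{2}} \sum_{i=1}^{N} \sum_{j=1}^{N} G_{\sigma}(e_{i} - e_{j}),
    \]

    where y is the test sample, A the dictionary, \|x\|_{\mathcal{A}} the atomic norm induced by a chosen atomic set, G_{\sigma} a Gaussian kernel, and \lambda a regularisation weight; choosing different atomic sets yields different MEE-based classifiers.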

  5. Robust Face Recognition via Minimum Error Entropy-Based Atomic Representation.

    PubMed

    Wang, Yulong; Tang, Yuan Yan; Li, Luoqing

    2015-12-01

    Representation-based classifiers (RCs) have attracted considerable attention in face recognition in recent years. However, most existing RCs use the mean square error (MSE) criterion as the cost function, which relies on the Gaussianity assumption of the error distribution and is sensitive to non-Gaussian noise. This may severely degrade the performance of MSE-based RCs in recognizing facial images with random occlusion and corruption. In this paper, we present a minimum error entropy-based atomic representation (MEEAR) framework for face recognition. Unlike existing MSE-based RCs, our framework is based on the minimum error entropy criterion, which does not depend on the error distribution and is shown to be more robust to noise. In particular, MEEAR can produce a discriminative representation vector by minimizing the atomic-norm-regularized Renyi's entropy of the reconstruction error. The optimality conditions are provided for the general atomic representation model. As a general framework, MEEAR can also be used as a platform to develop new classifiers. Two effective MEE-based RCs are proposed by defining appropriate atomic sets. The experimental results on popular face databases show that MEEAR can improve both the recognition accuracy and the reconstructed results compared with the state-of-the-art MSE-based RCs.

  6. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
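
    A minimal sketch of the edge-then-Gabor pipeline described above (illustrative only; filter parameters and library choices are assumptions, and the record's illumination normalization step is omitted):

    import numpy as np
    from skimage.feature import canny
    from skimage.filters import gabor

    def edge_gabor_features(gray_face, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
        # Edge map that depends on the shapes of the major facial components.
        edges = canny(gray_face).astype(float)
        features = []
        for f in frequencies:
            for k in range(n_orientations):
                # Gabor responses of the edge image at several scales and orientations.
                real, imag = gabor(edges, frequency=f, theta=k * np.pi / n_orientations)
                features.append(np.hypot(real, imag).ravel())   # magnitude response
        return np.concatenate(features)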

  7. A Lack of Sexual Dimorphism in Width-to-Height Ratio in White European Faces Using 2D Photographs, 3D Scans, and Anthropometry

    PubMed Central

    Kramer, Robin S. S.; Jones, Alex L.; Ward, Robert

    2012-01-01

    Facial width-to-height ratio has received a great deal of attention in recent research. Evidence from human skulls suggests that males have a larger relative facial width than females, and that this sexual dimorphism is an honest signal of masculinity, aggression, and related traits. However, evidence that this measure is sexually dimorphic in faces, rather than skulls, is surprisingly weak. We therefore investigated facial width-to-height ratio in three White European samples using three different methods of measurement: 2D photographs, 3D scans, and anthropometry. By measuring the same individuals with multiple methods, we demonstrated high agreement across all measures. However, we found no evidence of sexual dimorphism in the face. In our third study, we also found a link between facial width-to-height ratio and body mass index for both males and females, although this relationship did not account for the lack of dimorphism in our sample. While we showed sufficient power to detect differences between male and female width-to-height ratio, our results failed to support the general hypothesis of sexual dimorphism in the face. PMID:22880088
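
    For orientation, the facial width-to-height ratio studied here is commonly operationalised as bizygomatic width divided by upper-face height; a hypothetical sketch from 2D landmark coordinates (the landmark names are assumptions, not the paper's measurement protocol) is:

    def width_to_height_ratio(left_zygion, right_zygion, upper_lip_top, upper_eyelid):
        """Each argument is an (x, y) landmark in image coordinates."""
        width = abs(right_zygion[0] - left_zygion[0])      # bizygomatic width
        height = abs(upper_eyelid[1] - upper_lip_top[1])   # upper-face height
        return width / height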

  8. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs a local sparse appearance model and covariance pooling. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows the recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  9. Two-stage sparse representation-based face recognition with reconstructed images

    NASA Astrophysics Data System (ADS)

    Cheng, Guangtao; Song, Zhanjie; Lei, Yang; Han, Xiuning

    2014-09-01

    In order to address the challenge that both the training and testing images may be contaminated by random pixel corruption, occlusion, and disguise, a robust face recognition algorithm based on two-stage sparse representation is proposed. Specifically, noise in the training images is first eliminated by low-rank matrix recovery. Then, by exploiting the first-stage sparse representation computed by solving a new extended ℓ1-minimization problem, noise in the testing image can be successfully removed. After the elimination, feature extraction techniques that are more discriminative but sensitive to noise can be effectively performed on the reconstructed clean images, and the final classification is accomplished by utilizing the second-stage sparse representation obtained by solving the reduced ℓ1-minimization problem in a low-dimensional feature space. Extensive experiments are conducted on publicly available databases to verify the superiority and robustness of our algorithm.
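
    The two building blocks named in the abstract are commonly formulated as follows (standard formulations given for orientation; the paper's "extended" and "reduced" problems may differ in detail):

    \[
    \min_{L,\,S} \; \|L\|_{*} + \lambda \,\|S\|_{1} \quad \text{s.t.} \quad X = L + S
    \qquad\text{(low-rank matrix recovery of the training set)},
    \]
    \[
    \min_{x,\,e} \; \|x\|_{1} + \|e\|_{1} \quad \text{s.t.} \quad y = D\,x + e
    \qquad\text{(error-augmented } \ell_1\text{-minimization for the test image)},
    \]

    where X stacks the training images, L and S are their low-rank and sparse-error parts, D is the dictionary of cleaned training images, and e absorbs corruption in the test image y.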

  10. Low-Rank and Eigenface Based Sparse Representation for Face Recognition

    PubMed Central

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank approximation images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method. PMID:25334027
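
    A compact sketch of the dictionary-construction step (steps one and two above), assuming the Robust PCA stage has already produced cleaned, low-rank training images per class (variable names are assumptions):

    import numpy as np

    def eigenface_dictionary(low_rank_by_class, n_eigenfaces=10):
        """low_rank_by_class maps label -> (n_pixels, n_images) matrix of Robust-PCA-cleaned images."""
        atoms, labels = [], []
        for label, L in low_rank_by_class.items():
            U, _, _ = np.linalg.svd(L, full_matrices=False)   # SVD of the cleaned class matrix
            atoms.append(U[:, :n_eigenfaces])                 # leading eigenfaces of this class
            labels += [label] * n_eigenfaces
        return np.hstack(atoms), np.array(labels)             # compact, discriminative dictionary for SRC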

  11. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the classification of high-dimensional images such as faces. It utilizes training samples from all classes to represent a test sample and assigns a class label to the test sample using the representation residuals. However, this method still suffers from the problem that a limited number of training samples reduces the classification accuracy when applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC), which exploits novel virtual images and can obtain high classification accuracy. The procedure for producing virtual images is very simple, but their use brings a surprising performance improvement. The virtual images can sufficiently capture the features of the original face images in some cases. Extensive experimental results demonstrate that the proposed method can effectively improve the classification accuracy. This is mainly attributed to the integration of the collaborative representation and the proposed feature-information-dominated virtual images.
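
    For context, the collaborative representation backbone that MCRC builds on can be sketched as a ridge-regression coding followed by class-wise residual comparison (the virtual-image generation step of the record is not reproduced here; variable names and the regularisation value are illustrative):

    import numpy as np

    def crc_classify(test, train, labels, lam=0.01):
        """train: (n_features, n_samples); labels: (n_samples,); test: (n_features,)."""
        n = train.shape[1]
        # Represent the test sample collaboratively over all training samples (regularised least squares).
        coef = np.linalg.solve(train.T @ train + lam * np.eye(n), train.T @ test)
        scores = {}
        for c in np.unique(labels):
            mask = labels == c
            # Class-wise residual, normalised by the coefficient energy as in standard CRC.
            scores[c] = np.linalg.norm(test - train[:, mask] @ coef[mask]) / np.linalg.norm(coef[mask])
        return min(scores, key=scores.get)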

  12. Implicit race bias decreases the similarity of neural representations of black and white faces.

    PubMed

    Brosch, Tobias; Bar-David, Eyal; Phelps, Elizabeth A

    2013-02-01

    Implicit race bias has been shown to affect decisions and behaviors. It may also change perceptual experience by increasing perceived differences between social groups. We investigated how this phenomenon may be expressed at the neural level by testing whether the distributed blood-oxygenation-level-dependent (BOLD) patterns representing Black and White faces are more dissimilar in participants with higher implicit race bias. We used multivoxel pattern analysis to predict the race of faces participants were viewing. We successfully predicted the race of the faces on the basis of BOLD activation patterns in early occipital visual cortex, occipital face area, and fusiform face area (FFA). Whereas BOLD activation patterns in early visual regions, likely reflecting different perceptual features, allowed successful prediction for all participants, successful prediction on the basis of BOLD activation patterns in FFA, a high-level face-processing region, was restricted to participants with high pro-White bias. These findings suggest that stronger implicit pro-White bias decreases the similarity of neural representations of Black and White faces.

  13. Conscious and Non-conscious Representations of Emotional Faces in Asperger's Syndrome.

    PubMed

    Chien, Vincent S C; Tsai, Arthur C; Yang, Han Hsuan; Tseng, Yi-Li; Savostyanov, Alexander N; Liou, Michelle

    2016-07-31

    Several neuroimaging studies have suggested that the low spatial frequency content in an emotional face mainly activates the amygdala, pulvinar, and superior colliculus especially with fearful faces(1-3). These regions constitute the limbic structure in non-conscious perception of emotions and modulate cortical activity either directly or indirectly(2). In contrast, the conscious representation of emotions is more pronounced in the anterior cingulate, prefrontal cortex, and somatosensory cortex for directing voluntary attention to details in faces(3,4). Asperger's syndrome (AS)(5,6) represents an atypical mental disturbance that affects sensory, affective and communicative abilities, without interfering with normal linguistic skills and intellectual ability. Several studies have found that functional deficits in the neural circuitry important for facial emotion recognition can partly explain social communication failure in patients with AS(7-9). In order to clarify the interplay between conscious and non-conscious representations of emotional faces in AS, an EEG experimental protocol is designed with two tasks involving emotionality evaluation of either photograph or line-drawing faces. A pilot study is introduced for selecting face stimuli that minimize the differences in reaction times and scores assigned to facial emotions between the pretested patients with AS and IQ/gender-matched healthy controls. Information from the pretested patients was used to develop the scoring system used for the emotionality evaluation. Research into facial emotions and visual stimuli with different spatial frequency contents has reached discrepant findings depending on the demographic characteristics of participants and task demands(2). The experimental protocol is intended to clarify deficits in patients with AS in processing emotional faces when compared with healthy controls by controlling for factors unrelated to recognition of facial emotions, such as task difficulty, IQ and

  14. Conscious and Non-conscious Representations of Emotional Faces in Asperger's Syndrome.

    PubMed

    Chien, Vincent S C; Tsai, Arthur C; Yang, Han Hsuan; Tseng, Yi-Li; Savostyanov, Alexander N; Liou, Michelle

    2016-01-01

    Several neuroimaging studies have suggested that the low spatial frequency content in an emotional face mainly activates the amygdala, pulvinar, and superior colliculus especially with fearful faces(1-3). These regions constitute the limbic structure in non-conscious perception of emotions and modulate cortical activity either directly or indirectly(2). In contrast, the conscious representation of emotions is more pronounced in the anterior cingulate, prefrontal cortex, and somatosensory cortex for directing voluntary attention to details in faces(3,4). Asperger's syndrome (AS)(5,6) represents an atypical mental disturbance that affects sensory, affective and communicative abilities, without interfering with normal linguistic skills and intellectual ability. Several studies have found that functional deficits in the neural circuitry important for facial emotion recognition can partly explain social communication failure in patients with AS(7-9). In order to clarify the interplay between conscious and non-conscious representations of emotional faces in AS, an EEG experimental protocol is designed with two tasks involving emotionality evaluation of either photograph or line-drawing faces. A pilot study is introduced for selecting face stimuli that minimize the differences in reaction times and scores assigned to facial emotions between the pretested patients with AS and IQ/gender-matched healthy controls. Information from the pretested patients was used to develop the scoring system used for the emotionality evaluation. Research into facial emotions and visual stimuli with different spatial frequency contents has reached discrepant findings depending on the demographic characteristics of participants and task demands(2). The experimental protocol is intended to clarify deficits in patients with AS in processing emotional faces when compared with healthy controls by controlling for factors unrelated to recognition of facial emotions, such as task difficulty, IQ and

  15. Feature-based face representations and image reconstruction from behavioral and neural data

    PubMed Central

    Nestor, Adrian; Plaut, David C.; Behrmann, Marlene

    2016-01-01

    The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (MRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach. PMID:26711997

  16. Face recognition based on fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Guo, Hong; Huang, Peisen

    2010-03-01

    Two-dimensional face-recognition techniques suffer from facial texture and illumination variations. Although 3-D techniques can overcome these limitations, the reconstruction and storage expenses of 3-D information are extremely high. We present a novel face-recognition method that directly utilizes 3-D information encoded in face fringe patterns without having to reconstruct 3-D geometry. In the proposed method, a digital video projector is employed to sequentially project three phase-shifted sinusoidal fringe patterns onto the subject's face. Meanwhile, a camera is used to capture the distorted fringe patterns from an offset angle. Afterward, the face fringe images are analyzed by the phase-shifting method and the Fourier transform method to obtain a spectral representation of the 3-D face. Finally, the eigenface algorithm is applied to the face-spectrum images to perform face recognition. Simulation and experimental results demonstrate that the proposed method achieved satisfactory recognition rates with reduced computational complexity and storage expenses.
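
    The phase-retrieval step mentioned above, for three fringe images shifted by 2π/3, follows the textbook three-step phase-shifting formula (shown for orientation; the record also applies a Fourier-transform analysis that is not reproduced here):

    \[
    I_{k}(x,y) \;=\; I'(x,y) \;+\; I''(x,y)\,\cos\!\big[\phi(x,y) + 2\pi(k-2)/3\big], \quad k = 1,2,3,
    \qquad
    \phi(x,y) \;=\; \arctan\!\left( \sqrt{3}\,\frac{I_{1}-I_{3}}{2 I_{2}-I_{1}-I_{3}} \right),
    \]

    where I' is the average intensity, I'' the intensity modulation, and \phi the wrapped phase that encodes the 3D shape of the face.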

  17. Judging Normality and Attractiveness in Faces: Direct Evidence of a More Refined Representation for Own-Race, Young Adult Faces.

    PubMed

    Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J

    2016-09-01

    Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition.

  18. Judging Normality and Attractiveness in Faces: Direct Evidence of a More Refined Representation for Own-Race, Young Adult Faces.

    PubMed

    Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J

    2016-09-01

    Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition. PMID:27335127

  19. Band-Reweighed Gabor Kernel Embedding for Face Image Representation and Recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Li, Xiao-Xin; Lai, Zhao-Rong

    2014-02-01

    Face recognition with illumination or pose variation is a challenging problem in image processing and pattern recognition. A novel algorithm using band-reweighed Gabor kernel embedding to deal with the problem is proposed in this paper. A given image is first transformed by a group of Gabor filters, which output Gabor features using different orientation and scale parameters. A Fisher scoring function is used to measure the importance of features in each band, and the features with the largest scores are then preserved to reduce memory requirements. The reduced bands are combined through a weight vector, which is determined by a weighted kernel discriminant criterion and solved by a constrained quadratic programming method; the weighted sum of these nonlinear bands is then defined as the similarity between two images. Compared with existing concatenation-based Gabor feature representation and the uniformly weighted similarity calculation approaches, our method provides a new way to use Gabor features for face recognition and presents a reasonable interpretation for highlighting discriminant orientations and scales. The minimum Mahalanobis distance considering the spatial correlations within the data is exploited for feature matching, and the graphical lasso is used therein for directly estimating the sparse inverse covariance matrix. Experiments using benchmark databases show that our new algorithm improves the recognition results and obtains competitive performance.

  20. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants.

  1. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants. PMID:20551339

  2. Individuation training with other-race faces reduces preschoolers' implicit racial bias: a link between perceptual and social representation of faces in children.

    PubMed

    Xiao, Wen S; Fu, Genyue; Quinn, Paul C; Qin, Jinliang; Tanaka, James W; Pascalis, Olivier; Lee, Kang

    2015-07-01

    The present study examined whether perceptual individuation training with other-race faces could reduce preschool children's implicit racial bias. We used an 'angry = outgroup' paradigm to measure Chinese children's implicit racial bias against African individuals before and after training. In Experiment 1, children between 4 and 6 years were presented with angry or happy racially ambiguous faces that were morphed between Chinese and African faces. Initially, Chinese children demonstrated implicit racial bias: they categorized happy racially ambiguous faces as own-race (Chinese) and angry racially ambiguous faces as other-race (African). Then, the children participated in a training session where they learned to individuate African faces. Children's implicit racial bias was significantly reduced after training relative to that before training. Experiment 2 used the same procedure as Experiment 1, except that Chinese children were trained with own-race Chinese faces. These children did not display a significant reduction in implicit racial bias. Our results demonstrate that early implicit racial bias can be reduced by presenting children with other-race face individuation training, and support a linkage between perceptual and social representations of face information in children. PMID:25284211

  3. Individuation training with other-race faces reduces preschoolers' implicit racial bias: a link between perceptual and social representation of faces in children.

    PubMed

    Xiao, Wen S; Fu, Genyue; Quinn, Paul C; Qin, Jinliang; Tanaka, James W; Pascalis, Olivier; Lee, Kang

    2015-07-01

    The present study examined whether perceptual individuation training with other-race faces could reduce preschool children's implicit racial bias. We used an 'angry = outgroup' paradigm to measure Chinese children's implicit racial bias against African individuals before and after training. In Experiment 1, children between 4 and 6 years were presented with angry or happy racially ambiguous faces that were morphed between Chinese and African faces. Initially, Chinese children demonstrated implicit racial bias: they categorized happy racially ambiguous faces as own-race (Chinese) and angry racially ambiguous faces as other-race (African). Then, the children participated in a training session where they learned to individuate African faces. Children's implicit racial bias was significantly reduced after training relative to that before training. Experiment 2 used the same procedure as Experiment 1, except that Chinese children were trained with own-race Chinese faces. These children did not display a significant reduction in implicit racial bias. Our results demonstrate that early implicit racial bias can be reduced by presenting children with other-race face individuation training, and support a linkage between perceptual and social representations of face information in children.

  4. 3D representation of geochemical data, the corresponding alteration and associated REE mobility at the Ranger uranium deposit, Northern Territory, Australia

    NASA Astrophysics Data System (ADS)

    Fisher, Louise A.; Cleverley, James S.; Pownceby, Mark; MacRae, Colin

    2013-12-01

    Interrogation and 3D visualisation of multiple multi-element data sets collected at the Ranger 1 No. 3 uranium mine, in the Northern Territory of Australia, show a distinct and large-scale chemical zonation around the ore body. A central zone of Mg alteration, dominated by extensive clinochlore alteration, overprints a biotite-muscovite-K-feldspar assemblage which shows increasing loss of Na, Ba and Ca moving towards the ore body. Manipulation of pre-existing geochemical data and integration of new data collected from targeted `niche' samples make it possible to recognise chemical architecture within the system and identify potential fluid conduits. New trace element and rare earth element (REE) data show strong fractionation associated with the zoned alteration around the deposit and with fault planes that intersect and bound the deposit. Within the most altered portion of the system, isocon analysis indicates addition of elements including Mg, S, Cu, Au and Ni and removal of elements including Ca, K, Ba and Na within a zone of damage associated with ore precipitation. In the more distal parts of the system, processes of alteration and replacement associated with the mineralising system can be recognised. REE element data show enrichment in HREE centred about a characteristic peak in Dy in the high-grade ore zone while LREEs are enriched in the outermost portions of the system. The patterns recognised in 3D in zoning of geochemical groups and contoured S, K and Mg abundance and the observed REE patterns suggest a fluid flow regime in which fluids were predominately migrating upwards during ore deposition within the core of the ore system.

  5. Recognizing identity in the face of change: the development of an expression-independent representation of facial identity.

    PubMed

    Mian, Jasmine F; Mondloch, Catherine J

    2012-07-30

    Perceptual aftereffects have indicated that there is an asymmetry in the extent to which adults' representations of identity and expression are independent of one another. Their representation of expression is identity-dependent; the magnitude of expression aftereffects is reduced when the adaptation and test stimuli have different identities. In contrast, their representation of identity is expression-independent; the magnitude of identity aftereffects is independent of whether the adaptation and test stimuli pose the same expressions. Like adults, children's representation of expression is identity-dependent (Vida & Mondloch, 2009). Here we investigated whether they have an expression-dependent representation of facial identity. Adults and 8-year-olds (n = 20 per group) categorized faces in an identity continuum (Sue/Jen) after viewing an adapting stimulus that displayed the same or a different emotional expression. Both groups showed identity aftereffects that were not influenced by facial expression. We conclude that, like adults, 8-year-old children's representation of identity is expression-independent.

  6. Distinct representations of configural and part information across multiple face-selective regions of the human brain

    PubMed Central

    Golarai, Golijeh; Ghahremani, Dara G.; Eberhardt, Jennifer L.; Gabrieli, John D. E.

    2015-01-01

    Several regions of the human brain respond more strongly to faces than to other visual stimuli, such as regions in the amygdala (AMG), superior temporal sulcus (STS), and the fusiform face area (FFA). It is unclear if these brain regions are similar in representing the configuration or natural appearance of face parts. We used functional magnetic resonance imaging of healthy adults who viewed natural or schematic faces with internal parts that were either normally configured or randomly rearranged. Response amplitudes were reduced in the AMG and STS when subjects viewed stimuli whose configuration of parts were digitally rearranged, suggesting that these regions represent the 1st order configuration of face parts. In contrast, response amplitudes in the FFA showed little modulation whether face parts were rearranged or if the natural face parts were replaced with lines. Instead, FFA responses were reduced only when both configural and part information were reduced, revealing an interaction between these factors, suggesting distinct representation of 1st order face configuration and parts in the AMG and STS vs. the FFA. PMID:26594191

  7. Categorization, categorical perception, and asymmetry in infants' representation of face race.

    PubMed

    Anzures, Gizelle; Quinn, Paul C; Pascalis, Olivier; Slater, Alan M; Lee, Kang

    2010-07-01

    The present study examined whether 6- and 9-month-old Caucasian infants could categorize faces according to race. In Experiment 1, infants were familiarized with different female faces from a common ethnic background (i.e. either Caucasian or Asian) and then tested with female faces from a novel race category. Nine-month-olds were able to form discrete categories of Caucasian and Asian faces. However, 6-month-olds did not form discrete categories of faces based on race. In Experiment 2, a second group of 6- and 9-month-olds was tested to determine whether they could discriminate between different faces from the same race category. Results showed that both age groups could only discriminate between different faces from the own-race category of Caucasian faces. The findings of the two experiments taken together suggest that 9-month-olds formed a category of Caucasian faces that are further differentiated at the individual level. In contrast, although they could form a category of Asian faces, they could not discriminate between such other-race faces. This asymmetry in category formation at 9 months (i.e. categorization of own-race faces vs. categorical perception of other-race faces) suggests that differential experience with own- and other-race faces plays an important role in infants' acquisition of face processing abilities.

  8. Part A: Investigations of the Synthesis of Pyrazinochlorins and Other Porphyrin Derivatives. Part B: investigations of Student Translation Between 2-D/3-D Representations of Molecules

    NASA Astrophysics Data System (ADS)

    Dean, Michelle L.

    This dissertation will be composed of two parts. The first part was completed under the direction of Dr. Christian Bruckner and outlines the synthesis of porphyrins and related derivatives. It specifically explores the synthesis of pyrazinoporphyrin, a pyrrole-modified porphyrin, the use of microwaves for porphyrin synthesis, and the synthesis of a novel building block for use in an expanded porphyrin structure. Lastly, this part will describe a laboratory experiment, suitable for an organic chemistry course, which investigates the photophysical properties of porphyrins using brown eggs as a source of protoporphyrin IX. The second part, under the advisement of Dr. Tyson Miller, will detail research conducted on students' ability to translate between two-dimensional and three-dimensional representations of molecules. Using grounded theory and formal interviews, the errors students make as they translate from a two-dimensional drawing to a three-dimensional model, and vice versa, were investigated. This part also seeks to gain an understanding, through the use of phenomenography, of what factors contribute to cognitive overload when drawing chiral centers.

  9. Task-Specific Codes for Face Recognition: How they Shape the Neural Representation of Features for Detection and Individuation

    PubMed Central

    2008-01-01

    Background The variety of ways in which faces are categorized makes face recognition challenging for both synthetic and biological vision systems. Here we focus on two face processing tasks, detection and individuation, and explore whether differences in task demands lead to differences both in the features most effective for automatic recognition and in the featural codes recruited by neural processing. Methodology/Principal Findings Our study appeals to a computational framework characterizing the features representing object categories as sets of overlapping image fragments. Within this framework, we assess the extent to which task-relevant information differs across image fragments. Based on objective differences we find among task-specific representations, we test the sensitivity of the human visual system to these different face descriptions independently of one another. Both behavior and functional magnetic resonance imaging reveal effects elicited by objective task-specific levels of information. Behaviorally, recognition performance with image fragments improves with increasing task-specific information carried by different face fragments. Neurally, this sensitivity to the two tasks manifests as differential localization of neural responses across the ventral visual pathway. Fragments diagnostic for detection evoke larger neural responses than non-diagnostic ones in the right posterior fusiform gyrus and bilaterally in the inferior occipital gyrus. In contrast, fragments diagnostic for individuation evoke larger responses than non-diagnostic ones in the anterior inferior temporal gyrus. Finally, for individuation only, pattern analysis reveals sensitivity to task-specific information within the right “fusiform face area”. Conclusions/Significance Our results demonstrate: 1) information diagnostic for face detection and individuation is roughly separable; 2) the human visual system is independently sensitive to both types of information; 3) neural

  10. The Development of Sex Category Representation in Infancy: Matching of Faces and Bodies

    ERIC Educational Resources Information Center

    Hock, Alyson; Kangas, Ashley; Zieber, Nicole; Bhatt, Ramesh S.

    2015-01-01

    Sex is a significant social category, and adults derive information about it from both faces and bodies. Research indicates that young infants process sex category information in faces. However, no prior study has examined whether infants derive sex categories from bodies and match faces and bodies in terms of sex. In the current study,…

  11. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  12. 3D cryo-electron reconstruction of BmrA, a bacterial multidrug ABC transporter in an inward-facing conformation and in a lipidic environment.

    PubMed

    Fribourg, Pierre Frederic; Chami, Mohamed; Sorzano, Carlos Oscar S; Gubellini, Francesca; Marabini, Roberto; Marco, Sergio; Jault, Jean-Michel; Lévy, Daniel

    2014-05-15

    ABC (ATP-binding cassette) membrane exporters are efflux transporters of a wide diversity of molecules across the membrane at the expense of ATP. A key issue regarding their catalytic cycle is whether or not their nucleotide-binding domains (NBDs) are physically disengaged in the resting state. To settle this controversy, we obtained structural data on BmrA, a bacterial multidrug homodimeric ABC transporter, in a membrane-embedded state. BmrA in the apo state was reconstituted in lipid bilayers, forming a mixture of ring-shaped structures of 24 or 39 homodimers. Three-dimensional models of the two ring-shaped structures were calculated from cryo-electron microscopy at 2.3 nm and 2.5 nm resolution, respectively. In these structures, BmrA adopts an inward-facing open conformation similar to that found in the mouse P-glycoprotein structure, with the NBDs separated by 3 nm. Both lipidic leaflets delimiting the transmembrane domains of BmrA were clearly resolved. In planar membrane sheets, the NBDs were even more separated. BmrA in an ATP-bound conformation was determined from two-dimensional crystals grown in the presence of ATP and vanadate. A projection map calculated at 1.6 nm resolution shows an open outward-facing conformation. Overall, the data are consistent with a mechanism of drug transport involving large conformational changes of BmrA and show that a bacterial ABC exporter can adopt at least two open inward conformations in the lipid membrane.

  13. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study.

    PubMed

    Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine

    2009-05-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between

  14. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  15. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  16. The Representation and Processing of Familiar Faces in Dyslexia: Differences in Age of Acquisition Effects

    ERIC Educational Resources Information Center

    Smith-Spark, James H.; Moore, Viv

    2009-01-01

    Two under-explored areas of developmental dyslexia research, face naming and age of acquisition (AoA), were investigated. Eighteen dyslexic and 18 non-dyslexic university students named the faces of 50 well-known celebrities, matched for facial distinctiveness and familiarity. Twenty-five of the famous people were learned early in life, while the…

  17. The Representation of Information about Faces in the Temporal and Frontal Lobes

    ERIC Educational Resources Information Center

    Rolls, Edmund T.

    2007-01-01

    Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size and view of faces and objects, and that these neurons show rapid processing and rapid learning. Which face or object is present is encoded using a distributed…

  18. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  19. Under-Representation of Males in the Early Years: The Challenges Leaders Face

    ERIC Educational Resources Information Center

    Mistry, Malini; Sood, Krishan

    2013-01-01

    This article investigates why there appears to be an under-representation of males in comparison to their female colleagues in the Early Years (EY) sector, and the perception of male teachers progressing more quickly to leadership positions when they do enter this context. Using case studies of final year male students on an Initial Teacher…

  20. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  1. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  2. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  3. Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition.

    PubMed

    Zhang, Baochang; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-01-01

    A novel object descriptor, histogram of Gabor phase pattern (HGPP), is proposed for robust face recognition. In HGPP, the quadrant-bit codes are first extracted from faces based on the Gabor transformation. Global Gabor phase pattern (GGPP) and local Gabor phase pattern (LGPP) are then proposed to encode the phase variations. GGPP captures the variations derived from the orientation changes of the Gabor wavelet at a given scale (frequency), while LGPP encodes the local neighborhood variations by using a novel local XOR pattern (LXP) operator. Both are divided into nonoverlapping rectangular regions, from which spatial histograms are extracted and concatenated into an extended histogram feature to represent the original image. Finally, the recognition is performed by using the nearest-neighbor classifier with histogram intersection as the similarity measurement. The distinguishing features of HGPP are twofold: 1) HGPP can describe general face images robustly without a training procedure; 2) HGPP encodes the Gabor phase information, while most previous face recognition methods exploit the Gabor magnitude information. In addition, the Fisher separation criterion is further used to improve the performance of HGPP by weighting the subregions of the image according to their discriminative powers. The proposed methods are successfully applied to face recognition, and the experimental results on the large-scale FERET and CAS-PEAL databases show that the proposed algorithms significantly outperform other well-known systems in terms of recognition rate.
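
    As a rough illustration of two ingredients named in the abstract, the Python/NumPy sketch below (not the authors' code) computes a simplified local XOR pattern over a quadrant-bit code map and compares two region histograms with histogram intersection; the quadrant-bit codes here are random stand-ins for the Gabor-phase codes, and collapsing the XOR to a single bit per neighbour is a simplifying assumption.

    import numpy as np

    def local_xor_pattern(codes):
        """Encode each pixel by XOR-comparing its quadrant code with its 8 neighbours."""
        h, w = codes.shape
        centre = codes[1:-1, 1:-1]
        out = np.zeros((h - 2, w - 2), dtype=int)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        for bit, (dy, dx) in enumerate(offsets):
            neigh = codes[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out += ((neigh ^ centre) != 0).astype(int) << bit   # one bit per neighbour
        return out

    def histogram_intersection(h1, h2):
        """Similarity between two normalised histograms: sum of element-wise minima."""
        return float(np.minimum(h1, h2).sum())

    rng = np.random.default_rng(0)
    codes_a = rng.integers(0, 4, size=(32, 32))    # stand-in quadrant-bit codes (0..3)
    codes_b = rng.integers(0, 4, size=(32, 32))
    ha, _ = np.histogram(local_xor_pattern(codes_a), bins=256, range=(0, 256), density=True)
    hb, _ = np.histogram(local_xor_pattern(codes_b), bins=256, range=(0, 256), density=True)
    print(round(histogram_intersection(ha, hb), 3))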

  4. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration.

  5. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that provides an alternative, simplified language to help users who lack expertise in Java 3D construct three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
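
    A minimal sketch of the command convention described above (the first word names the command, the remainder are its data arguments) is given below in Python rather than Java 3D; the command names "sphere" and "rotate" and their handlers are hypothetical and are not part of FastScript3D's actual vocabulary.

    def make_sphere(name, radius):
        # Hypothetical handler: would create a sphere object in a 3D scene.
        print(f"create sphere '{name}' with radius {float(radius)}")

    def rotate(name, axis, degrees):
        # Hypothetical handler: would rotate an existing object about an axis.
        print(f"rotate '{name}' about {axis} by {float(degrees)} degrees")

    HANDLERS = {"sphere": make_sphere, "rotate": rotate}   # illustrative command table

    def run_command(line):
        """Dispatch a one-line text command: first word = command, rest = arguments."""
        command, *args = line.split()
        HANDLERS[command](*args)

    for line in ["sphere ball 2.5", "rotate ball y 45"]:
        run_command(line)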

  6. Geometric Figure-Rotation Task and Face Representation in Dyslexia: Role of Spatial Relations and Orientation.

    ERIC Educational Resources Information Center

    Pontius, Anneliese A.

    1981-01-01

    Compared to normal readers, the dyslexic children not only drew significantly more "neolithic faces" but also made more errors of spatial displacement (up/down or right/left) on parts of asymmetric figures, while among both groups there were similar percentages who made no errors in the global rotation of figures. (Author/SJL)

  7. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms has declined considerably compared to the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualisations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  8. Why it's easier to remember seeing a face we already know than one we don't: preexisting memory representations facilitate memory formation.

    PubMed

    Reder, Lynne M; Victoria, Lindsay W; Manelis, Anna; Oates, Joyce M; Dutcher, Janine M; Bates, Jordan T; Cook, Shaun; Aizenstein, Howard J; Quinlan, Joseph; Gyulai, Ferenc

    2013-03-01

    In two experiments, we provided support for the hypothesis that stimuli with preexisting memory representations (e.g., famous faces) are easier to associate to their encoding context than are stimuli that lack long-term memory representations (e.g., unknown faces). Subjects viewed faces superimposed on different backgrounds (e.g., the Eiffel Tower). Face recognition on a surprise memory test was better when the encoding background was reinstated than when it was swapped with a different background; however, the reinstatement advantage was modulated by how many faces had been seen with a given background, and reinstatement did not improve recognition for unknown faces. The follow-up experiment added a drug intervention that inhibited the ability to form new associations. Context reinstatement did not improve recognition for famous or unknown faces under the influence of the drug. The results suggest that it is easier to associate context to faces that have a preexisting long-term memory representation than to faces that do not.

  9. 3D facial expression modeling for recognition

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.; Dass, Sarat C.

    2005-03-01

    Current two-dimensional, image-based face recognition systems encounter difficulties with large variations in facial appearance due to pose, illumination and expression changes. Utilizing 3D information of human faces is promising for handling the pose and lighting variations. While the 3D shape of a face does not change due to head pose (rigid) and lighting changes, it is not invariant to non-rigid facial movement and evolution, such as expressions and aging effects. We propose a facial surface matching framework to match multiview facial scans to a 3D face model, where the (non-rigid) expression deformation is explicitly modeled for each subject, resulting in a person-specific deformation model. The thin plate spline (TPS) is applied to model the deformation based on the facial landmarks. The deformation is applied to the 3D neutral expression face model to synthesize the corresponding expression. Both the neutral and the synthesized 3D surface models are used to match a test scan. The surface registration and matching between a test scan and a 3D model are achieved by a modified Iterative Closest Point (ICP) algorithm. Preliminary experimental results demonstrate that the proposed expression modeling and recognition-by-synthesis schemes improve the 3D matching accuracy.
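
    For orientation, the sketch below shows a plain rigid point-to-point ICP loop in Python/NumPy/SciPy; it is a simplified stand-in, not the paper's modified ICP, and it omits the thin-plate-spline expression deformation entirely. The synthetic model/scan data and the iteration count are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(scan, model, iterations=20):
        """Align 'scan' to 'model' with rigid point-to-point ICP; returns (R, t)."""
        tree = cKDTree(model)
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            moved = scan @ R.T + t
            _, idx = tree.query(moved)              # nearest model point per scan point
            src, dst = moved, model[idx]
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)     # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t_step = dst_c - R_step @ src_c
            R, t = R_step @ R, R_step @ t + t_step  # compose with current estimate
        return R, t

    rng = np.random.default_rng(1)
    model = rng.normal(size=(500, 3))               # stand-in "3D face model" points
    angle = np.deg2rad(10)
    true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    scan = model @ true_R.T + np.array([0.10, -0.20, 0.05])   # misaligned "scan"
    R, t = icp(scan, model)
    print(np.round(R @ true_R, 3))                  # close to identity when aligned
    print(np.round(t, 3))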

  10. 3-D Maps and Compasses in the Brain.

    PubMed

    Finkelstein, Arseny; Las, Liora; Ulanovsky, Nachum

    2016-07-01

    The world has a complex, three-dimensional (3-D) spatial structure, but until recently the neural representation of space was studied primarily in planar horizontal environments. Here we review the emerging literature on allocentric spatial representations in 3-D and discuss the relations between 3-D spatial perception and the underlying neural codes. We suggest that the statistics of movements through space determine the topology and the dimensionality of the neural representation, across species and different behavioral modes. We argue that hippocampal place-cell maps are metric in all three dimensions, and might be composed of 2-D and 3-D fragments that are stitched together into a global 3-D metric representation via the 3-D head-direction cells. Finally, we propose that the hippocampal formation might implement a neural analogue of a Kalman filter, a standard engineering algorithm used for 3-D navigation. PMID:27442069
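
    As a pointer to the "standard engineering algorithm" the review mentions, here is a minimal linear Kalman filter for a 3-D constant-velocity motion model in Python/NumPy; the state layout, noise covariances and trajectory are illustrative assumptions, and the sketch makes no claim about how such a filter might be implemented neurally.

    import numpy as np

    dt = 0.1
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                       # constant-velocity dynamics: x += v * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])     # observe 3-D position only
    Q = 0.01 * np.eye(6)                             # process noise covariance (assumed)
    R = 0.25 * np.eye(3)                             # measurement noise covariance (assumed)

    x = np.zeros(6)                                  # state estimate [x, y, z, vx, vy, vz]
    P = np.eye(6)                                    # state covariance
    rng = np.random.default_rng(0)
    true_pos = np.zeros(3)
    true_vel = np.array([1.0, 0.5, -0.2])

    for _ in range(100):
        true_pos = true_pos + dt * true_vel
        z = true_pos + rng.normal(scale=0.5, size=3)   # noisy 3-D position fix
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                            # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P

    print(np.round(x[:3], 2), np.round(x[3:], 2))      # estimated position and velocity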

  11. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

    Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all the necessary functionality to represent and manipulate biological 3D datasets, very few packages are easily accessible (browser-based), cross-platform and usable by non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781

  12. [The influence of camera-to-object distance and focal length on the representation of faces].

    PubMed

    Verhoff, Marcel A; Witzel, Carsten; Ramsthaler, Frank; Kreutz, Kerstin

    2007-01-01

    When one thinks of the so-called barrel or wide-angle distortion, grotesquely warped faces may come to mind. For less extreme cases with primarily inconspicuous facial proportions, the question, however, still arises whether there may be a resulting impact on the identification of faces. In the first experiment, 3 test persons were photographed at a fixed camera-to-object distance of 2 m. In the second experiment, 18 test persons were each photographed at a distance of 0.5 m and 2.0 m. For both experiments photographs were taken from a fixed angle of view in alignment with the Frankfurt Plane. An isolated effect of the focal length on facial proportions could not be demonstrated. On the other hand, changes in the camera-to-object distance clearly influenced facial proportions and shape. A standardized camera-to-object distance for passport photos, as well as reconstruction of the camera-to-object distance from crime scene photos and the use of this same distance in taking photographs for comparison of suspects are called for. A proposal to refer to wide-angle distortion as the nearness effect is put forward. PMID:17879705
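
    A small pinhole-camera calculation (Python) can make the reported distance effect concrete: the ratio between the projected sizes of two facial features at different depths depends on the camera-to-object distance but not on the focal length. The 0.12 m depth offset between the nose tip and the ear plane, and the lens and distance values, are illustrative assumptions, not values from the study.

    def projected_size(real_size, depth, focal_length):
        """Pinhole projection: image size scales as focal_length / depth."""
        return focal_length * real_size / depth

    nose_width, ear_depth_offset = 0.035, 0.12         # metres, assumed
    for focal_length in (0.035, 0.085):                # "wide" and "portrait" lenses
        for distance in (0.5, 2.0):                    # camera-to-nose distance in metres
            nose = projected_size(nose_width, distance, focal_length)
            ref = projected_size(nose_width, distance + ear_depth_offset, focal_length)
            print(f"f={focal_length * 1000:.0f} mm, d={distance} m: "
                  f"nose appears {nose / ref:.2f}x larger than the same width at ear depth")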

  13. Self assembled structures for 3D integration

    NASA Astrophysics Data System (ADS)

    Rao, Madhav

    Three dimensional (3D) micro-scale structures attached to a silicon substrate have various applications in microelectronics. However, formation of 3D structures using conventional micro-fabrication techniques are not efficient and require precise control of processing parameters. Self assembly is a method for creating 3D structures that takes advantage of surface area minimization phenomena. Solder based self assembly (SBSA), the subject of this dissertation, uses solder as a facilitator in the formation of 3D structures from 2D patterns. Etching a sacrificial layer underneath a portion of the 2D pattern allows the solder reflow step to pull those areas out of the substrate plane resulting in a folded 3D structure. Initial studies using the SBSA method demonstrated low yields in the formation of five different polyhedra. The failures in folding were primarily attributed to nonuniform solder deposition on the underlying metal pads. The dip soldering method was analyzed and subsequently refined. A modified dip soldering process provided improved yield among the polyhedra. Solder bridging referred as joining of solder deposited on different metal patterns in an entity influenced the folding mechanism. In general, design parameters such as small gap-spacings and thick metal pads were found to favor solder bridging for all patterns studied. Two types of soldering: face and edge soldering were analyzed. Face soldering refers to the application of solder on the entire metal face. Edge soldering indicates application of solder only on the edges of the metal face. Mechanical grinding showed that face soldered SBSA structures were void free and robust in nature. In addition, the face soldered 3D structures provide a consistent heat resistant solder standoff height that serve as attachments in the integration of dissimilar electronic technologies. Face soldered 3D structures were developed on the underlying conducting channel to determine the thermo-electric reliability of

  14. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  15. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  16. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they have been modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer, by building an object detector which leverages the expressive power of 3D object representations while at the same time can be robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different level of expressiveness. We end up with a 3D object model, consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2] , 3D object classes [3] , Pascal3D+ [4] , Pascal VOC 2007 [5] , EPFL multi-view cars[6] ). PMID:26440264

  17. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater (radiological) tissue equivalence and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential to provide a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  18. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  19. Gravity and spatial orientation in virtual 3D-mazes.

    PubMed

    Vidal, Manuel; Lipshits, Mark; McIntyre, Joseph; Berthoz, Alain

    2003-01-01

    In order to bring new insights into the processing of 3D spatial information, we conducted experiments on the capacity of human subjects to memorize 3D-structured environments, such as buildings with several floors or the potentially complex 3D structure of an orbital space station. We had subjects move passively in one of two different exploration modes, through a visual virtual environment that consisted of a series of connected tunnels. In upright displacement, self-rotation when going around corners in the tunnels was limited to yaw rotations. For horizontal translations, subjects faced forward in the direction of motion. When moving up or down through vertical segments of the 3D tunnels, however, subjects faced the tunnel wall, remaining upright as if moving up and down in a glass elevator. In the unconstrained displacement mode, subjects would appear to climb or dive face-forward when moving vertically; thus, in this mode subjects could experience visual flow consistent with rotations about any of the 3 canonical axes. In a previous experiment, subjects were asked to determine whether a static, outside view of a test tunnel corresponded or not to the tunnel through which they had just passed. Results showed that performance was better on this task for the upright than for the unconstrained displacement mode; i.e. when subjects remained "upright" with respect to the virtual environment as defined by the subject's posture in the first segment. This effect suggests that gravity may provide a key reference frame used in the shift between egocentric and allocentric representations of the 3D virtual world. To check whether it is the polarizing effects of gravity that lead to the favoring of the upright displacement mode, the experimental paradigm was adapted for orbital flight and performed by cosmonauts onboard the International Space Station. For these flight experiments the previous recognition task was replaced by a computerized reconstruction task, which proved

  20. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  1. Intracortical and Thalamocortical Connections of the Hand and Face Representations in Somatosensory Area 3b of Macaque Monkeys and Effects of Chronic Spinal Cord Injuries.

    PubMed

    Chand, Prem; Jain, Neeraj

    2015-09-30

    Brains of adult monkeys with chronic lesions of dorsal columns of spinal cord at cervical levels undergo large-scale reorganization. Reorganization results in expansion of intact chin inputs, which reactivate neurons in the deafferented hand representation in the primary somatosensory cortex (area 3b), ventroposterior nucleus of the thalamus and cuneate nucleus of the brainstem. A likely contributing mechanism for this large-scale plasticity is sprouting of axons across the hand-face border. Here we determined whether such sprouting takes place in area 3b. We first determined the extent of intrinsic corticocortical connectivity between the hand and the face representations in normal area 3b. Small amounts of neuroanatomical tracers were injected in these representations close to the electrophysiologically determined hand-face border. Locations of the labeled neurons were mapped with respect to the detailed electrophysiological somatotopic maps and histologically determined hand-face border revealed in sections of the flattened cortex stained for myelin. Results show that intracortical projections across the hand-face border are few. In monkeys with chronic unilateral lesions of the dorsal columns and expanded chin representation, connections across the hand-face border were not different compared with normal monkeys. Thalamocortical connections from the hand and face representations in the ventroposterior nucleus to area 3b also remained unaltered after injury. The results show that sprouting of intrinsic connections in area 3b or the thalamocortical inputs does not contribute to large-scale cortical plasticity. Significance statement: Long-term injuries to dorsal spinal cord in adult primates result in large-scale somatotopic reorganization due to which chin inputs expand into the deafferented hand region. Reorganization takes place in multiple cortical areas, and thalamic and medullary nuclei. To what extent this brain reorganization due to dorsal column injuries

  3. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  4. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  5. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  6. Integration of 3D structure from disparity into biological motion perception independent of depth awareness.

    PubMed

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers' depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception.

  7. Design of 3d Topological Data Structure for 3d Cadastre Objects

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. A.; Rahman, A. Abdul; Hassan, M. I.

    2016-09-01

    This paper describes the design of 3D modelling and a topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. The Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.
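
    A minimal sketch of the five classes listed above, written as plain Python dataclasses, is given below; the attribute names and types are illustrative only and do not reproduce the LADM schema or the authors' implementation.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Point:
        pid: int
        xyz: Tuple[float, float, float]

    @dataclass
    class BoundaryFaceString:
        fsid: int
        points: List[Point]                 # ordered points bounding a face string

    @dataclass
    class BoundaryFace:
        fid: int
        strings: List[BoundaryFaceString]

    @dataclass
    class Tetrahedron:
        tid: int
        vertices: Tuple[Point, Point, Point, Point]

    @dataclass
    class SpatialUnit:
        suid: int
        faces: List[BoundaryFace] = field(default_factory=list)
        tetrahedra: List[Tetrahedron] = field(default_factory=list)   # TEN decomposition

    p = [Point(i, c) for i, c in enumerate([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])]
    unit = SpatialUnit(1, tetrahedra=[Tetrahedron(1, tuple(p))])
    print(len(unit.tetrahedra))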

  8. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  9. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  10. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  11. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
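
    The stereo-vision approach mentioned above rests on the standard rectified-stereo relation depth = focal_length x baseline / disparity; the short Python sketch below evaluates it for a few camera parameters and disparities, all of which are illustrative assumptions.

    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Rectified-stereo depth: Z = f * B / d."""
        return focal_length_px * baseline_m / disparity_px

    focal_length_px = 1200.0     # focal length expressed in pixels (assumed)
    baseline_m = 0.10            # distance between the two cameras (assumed)
    for disparity_px in (60.0, 30.0, 10.0):
        z = depth_from_disparity(disparity_px, focal_length_px, baseline_m)
        print(f"disparity {disparity_px:>4} px  ->  depth {z:.2f} m")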

  12. To What Degree Does Handling Concrete Molecular Models Promote the Ability to Translate and Coordinate between 2D and 3D Molecular Structure Representations? A Case Study with Algerian Students

    ERIC Educational Resources Information Center

    Mohamed-Salah, Boukhechem; Alain, Dumon

    2016-01-01

    This study aims to assess whether the handling of concrete ball-and-stick molecular models promotes translation between diagrammatic representations and a concrete model (or vice versa) and the coordination of the different types of structural representations of a given molecular structure. Forty-one Algerian undergraduate students were requested…

  13. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS products and extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses the available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  14. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range-measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to get depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, in e.g. a PC in real time. In order to get high-resolution and quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig, or, in the case of a moving camera, the scene itself can be used to calibrate most of the parameters. After calibration an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper will present results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.

  15. Factors affecting metal-metal bonding in the face-shared d(3)d(3) bioctahedral dimer systems, MM'Cl(9)(5-) (M, M' = V, Nb, Ta).

    PubMed

    Petrie, Simon; Stranger, Robert

    2002-12-01

    Density functional theory (DFT) calculations have been used to investigate the d(3)d(3) bioctahedral complexes, MM'Cl(9)(5-), of the vanadium triad. Broken-symmetry calculations upon these species indicate that the V-containing complexes have optimized metal-metal separations of 3.4-3.5 A, corresponding to essentially localized magnetic electrons. The metal-metal separations in these weakly coupled dimers are elongated as a consequence of Coulombic repulsion, which profoundly influences (and destabilizes) the gas-phase structures for such dimers; nevertheless, the intermetallic interactions in the V-containing dimers involve significantly greater metal-metal bonding character than in the analogous Cr-containing dimers. These observations all show good agreement with existing experimental (solid state) results for the chloride-bridged, face-shared dimers V(2)Cl(9)(5-) and V(2)Cl(3)(thf)(6)(+). In contrast to the V-containing dimers, complexes featuring only Nb and Ta have much shorter intermetallic distances (approximately 2.4 A) consistent with d-electron delocalization and formal metal-metal triple bond formation; again, good agreement is found with available experimental data. Calculations on the complexes V(2)(mu-Cl)(3)(dme)(6)(+), Nb(2)(mu-dms)(3)Cl(6)(2-), and Ta(2)(mu-dms)(3)Cl(6)(2-), which are closely related to compounds for which crystallographic structural data exist, have been pursued and provide an insight into the intermetallic interactions in the experimentally characterized complexes. Analysis of the contributions from d-orbital overlap (E(ovlp)) stabilization, as well as spin polarization (exchange) stabilization of localized d electrons (E(spe)), has also been attempted for the MM'Cl(9)(5-) dimers. While E(ovlp) clearly dominates over E(spe) as a stabilizing factor in those dimers containing only Nb and Ta metal atoms, detailed assessment of the competition between E(ovlp) and E(spe) for V-containing dimers is obstructed by the instability of

  16. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,C), in the context of its relation with 3d N = 2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,C) CS theory, in 3d N = 2 theory, in 5d N = 2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  17. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
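
    A minimal Python/NumPy sketch of the core operation behind such headphone-based 3D audio, convolving a mono source with a left/right head-related impulse response pair, is shown below; the "HRIRs" here are crude placeholders (an interaural delay plus attenuation), not measured filters and not the Convolvotron's actual processing.

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    source = np.sin(2 * np.pi * 440 * t)           # 1 s mono test tone

    def placeholder_hrir(delay_samples, gain, length=64):
        """Toy head-related impulse response: a single delayed, attenuated impulse."""
        h = np.zeros(length)
        h[delay_samples] = gain
        return h

    # Source placed to the listener's right: the left ear gets a later, quieter copy.
    hrir_left = placeholder_hrir(delay_samples=30, gain=0.4)
    hrir_right = placeholder_hrir(delay_samples=0, gain=1.0)

    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    binaural = np.stack([left, right], axis=1)     # 2-channel signal for headphones
    print(binaural.shape)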

  18. Urbanisation and 3d Spatial - a Geometric Approach

    NASA Astrophysics Data System (ADS)

    Duncan, E. E.; Rahman, A. Abdul

    2013-09-01

    Urbanisation creates immense competition for space; this may be attributed to an increase in population owing to domestic and external tourism. Most cities are constantly exploring all avenues for maximising their limited space. Hence, urban or city authorities need to plan, expand and use the three-dimensional (3D) space above, on and below the city surface. Thus, difficulties in property ownership and in the geometric representation of the 3D city space are a major challenge. This research investigates the concept of representing a geometric, topological 3D spatial model capable of representing 3D volume parcels for man-made constructions above and below the 3D surface volume parcel. A review of spatial data models suggests that the 3D TIN (TEN) model is significant and can be used as a unified model. The concepts and the logical and physical models of 3D TIN for 3D volumes, using tetrahedrons as the base geometry, are presented and implemented to show man-made constructions above and below the surface parcel within a user-friendly graphical interface. Concepts for 3D topology and 3D analysis are discussed. Simulations of this model for 3D cadastre are implemented. This model can be adopted by most countries to enhance and streamline geometric 3D property ownership for urban centres. The 3D TIN concept for spatial modelling can be adopted for the LA_Spatial part of the Land Administration Domain Model (LADM) (ISO/TC211, 2012), as it satisfies the concept of 3D volumes.
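
    To illustrate the tetrahedron-based geometry behind the 3D TIN (TEN) model, the Python/NumPy sketch below computes the volume of a simple 3D parcel (a unit cube decomposed into five tetrahedra) as the sum of the individual tetrahedron volumes; the decomposition and coordinates are illustrative assumptions, not data from the paper.

    import numpy as np

    def tet_volume(a, b, c, d):
        """Volume of the tetrahedron with vertices a, b, c, d."""
        return abs(np.linalg.det(np.array([b - a, c - a, d - a]))) / 6.0

    # Vertices of a 1 x 1 x 1 cube, keyed by their coordinates.
    v = {k: np.array(p, dtype=float) for k, p in {
        "000": (0, 0, 0), "100": (1, 0, 0), "010": (0, 1, 0), "001": (0, 0, 1),
        "110": (1, 1, 0), "101": (1, 0, 1), "011": (0, 1, 1), "111": (1, 1, 1)}.items()}

    # One standard decomposition of the cube into five tetrahedra.
    tets = [("100", "010", "001", "111"),   # central tetrahedron
            ("000", "100", "010", "001"),
            ("110", "100", "010", "111"),
            ("101", "100", "001", "111"),
            ("011", "010", "001", "111")]

    total = sum(tet_volume(*(v[k] for k in tet)) for tet in tets)
    print(round(total, 6))                  # 1.0, the volume of the cube-shaped parcel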

  19. 3-D Finite Element Heat Transfer

    1992-02-01

    TOPAZ3D is a three-dimensional implicit finite element computer code for heat transfer analysis. TOPAZ3D can be used to solve for the steady-state or transient temperature field on three-dimensional geometries. Material properties may be temperature-dependent and either isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. TOPAZ3D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.

  20. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  1. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  2. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  3. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  4. Individuation Training with Other-Race Faces Reduces Preschoolers' Implicit Racial Bias: A Link between Perceptual and Social Representation of Faces in Children

    ERIC Educational Resources Information Center

    Xiao, Wen S.; Fu, Genyue; Quinn, Paul C.; Qin, Jinliang; Tanaka, James W.; Pascalis, Olivier; Lee, Kang

    2015-01-01

    The present study examined whether perceptual individuation training with other-race faces could reduce preschool children's implicit racial bias. We used an "angry = outgroup" paradigm to measure Chinese children's implicit racial bias against African individuals before and after training. In Experiment 1, children between 4 and 6 years…

  5. An overview of 3D software visualization.

    PubMed

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions. PMID:19008558

  6. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes many computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (useful for checking relationships among a large number of processes or processors) and the time chart (useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology that enables easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  7. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time-consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitates and improves this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a computer-aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  8. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in the presentation of high-dimensional data and graphics due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system so that 3D objects are truthfully perceived. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  9. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.
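
    The Doppler relation invoked here can be made concrete with a short sketch (plain Python; the wavelengths below are placeholder values for illustration, not measurements from the Cas A data):

        # Non-relativistic Doppler shift: radial velocity from the displacement
        # of a known emission line. Positive values mean the debris is receding.
        C_KM_S = 299_792.458  # speed of light in km/s

        def radial_velocity(lambda_observed_nm, lambda_rest_nm):
            return C_KM_S * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

        # Hypothetical line at a rest wavelength of 500.0 nm observed at 505.0 nm
        print(radial_velocity(505.0, 500.0))  # about 2998 km/s, moving away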

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component, which astronomers were unable to map into 3-D prior to these Spitzer observations, consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  10. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component, which astronomers were unable to map into 3-D prior to these Spitzer observations, consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  11. Evolution of Archaea in 3D modeling

    NASA Astrophysics Data System (ADS)

    Pikuta, Elena V.; Tankosic, Dragana; Sheldon, Rob

    2012-11-01

    The analysis of all groups of Archaea performed in two dimensions demonstrated a specific distribution of archaeal species as a function of pH/temperature, temperature/salinity and pH/salinity. The work presented here extends this analysis with three-dimensional (3D) modeling on a logarithmic scale. As in the 2D representation, the "Rules of the Diagonal" are expressed even more clearly in the 3D models. In this article, we used 3D mesh modeling to show the range of distribution of each separate group of Archaea as a function of pH, temperature, and salinity. Visible overlaps and links between different groups indicate a direction of evolution in Archaea. The major direction in ancestral life (the vector of evolution) runs from high-temperature, acidic, low-salinity systems towards low-temperature, alkaline, high-salinity systems. The geometric coordinates and distribution of the separate groups of Archaea in 3D were analyzed with a mathematical description of the functions. Based on the obtained data, a new model for the origin and evolution of life on Earth is proposed; the geometry of this model is described by a hyperboloid of one sheet. Conclusions of this research are consistent with previous results derived from the two-dimensional diagrams. This approach is suggested as a new method for analyzing any biological group according to its environmental parameters.

  12. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features was successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.
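
    The parametric-eigenspace idea referred to above can be sketched compactly: images of the object at known in-plane rotations are compressed into a low-dimensional PCA basis, and the pose of a query image is estimated by a nearest-neighbour search in that coefficient space. The toy example below uses synthetic bar images as stand-ins for real views and is only meant to convey the structure of the method, not to reproduce the report's system:

        import numpy as np

        def make_view(angle, size=32):
            # Toy "object": a bright bar rotated by the given angle (stand-in for real images).
            y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
            xr = x * np.cos(angle) + y * np.sin(angle)
            return (np.abs(xr) < 0.15).astype(float).ravel()

        angles = np.linspace(0, np.pi, 90, endpoint=False)      # 2-degree steps
        gallery = np.stack([make_view(a) for a in angles])      # one row per pose

        mean = gallery.mean(axis=0)
        _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
        basis = vt[:8]                                          # first 8 eigenimages
        manifold = (gallery - mean) @ basis.T                   # pose manifold in 8-D

        query = make_view(np.deg2rad(37.0))
        coeffs = (query - mean) @ basis.T
        estimate = angles[np.argmin(np.linalg.norm(manifold - coeffs, axis=1))]
        print("estimated pose (deg):", round(float(np.degrees(estimate)), 1))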

  13. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  15. 3D Geo: An Alternative Approach

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.

    2016-10-01

    The expression GEO is mostly used to denote a relation to the Earth. However, it should not be confined to what is related to the Earth's surface, as other objects, such as cultural heritage objects, also need three-dimensional representation and documentation. They include both tangible and intangible ones. In this paper the 3D data acquisition and 3D modelling of cultural heritage assets are briefly described and their significance is highlighted. Moreover, the organization of such information, related to monuments and artefacts, into relational databases, and its use for purposes other than just geometric documentation, is also described. To help the reader understand the above, several characteristic examples are presented, their methodology explained, and their results evaluated.

  16. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  17. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  18. 3D-Measuring for Head Shape Covering Hair

    NASA Astrophysics Data System (ADS)

    Kato, Tsukasa; Hattori, Koosuke; Nomura, Takuya; Taguchi, Ryo; Hoguro, Masahiro; Umezaki, Taizo

    3D measurement is attracting attention as 3D displays spread rapidly. In particular, the face and head need to be measured for content production. However, measuring hair remains difficult. The purpose of this research is therefore to measure the face and hair with the phase-shift method. By using sinusoidal fringe images adapted for hair measurement, the problems specific to measuring hair, dark color and reflection, are addressed.
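
    As a rough illustration of the phase-shift principle behind this kind of measurement, the sketch below recovers a wrapped phase map from four sinusoidal fringe images shifted by 90 degrees each. This is a generic four-step formula applied to synthetic data, not the authors' hair-specific fringe patterns:

        import numpy as np

        def wrapped_phase(i1, i2, i3, i4):
            # Four-step phase shifting with fringes shifted by 0, 90, 180 and 270 degrees.
            # After unwrapping and calibration, the phase maps to surface height.
            return np.arctan2(i4 - i2, i1 - i3)

        # Synthetic example: a linear phase ramp standing in for a tilted surface
        ramp = np.tile(np.linspace(0, 4 * np.pi, 256), (256, 1))
        shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
        frames = [0.5 + 0.5 * np.cos(ramp + s) for s in shifts]
        phi = wrapped_phase(*frames)  # wrapped into (-pi, pi]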

  19. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind; existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate the flow of data originating from heterogeneous data sources, interoperability across different operating systems, and 3D visual representations that enhance end-user interaction.

  20. INCORPORATING DYNAMIC 3D SIMULATION INTO PRA

    SciTech Connect

    Steven R Prescott; Curtis Smith

    2011-07-01

    Through continued advancement in computational resources, development that was previously done by trial-and-error production is now performed through computer simulation. These virtual physical representations have the potential to provide accurate and valid modeling results and are being used in many different technical fields. Risk assessment now has the opportunity to use 3D simulation to improve analysis results and insights, especially for external event analysis. By using simulations, the modeler only has to determine the likelihood of an event without having to also predict the results of that event. The 3D simulation automatically determines not only the outcome of the event, but when those failures occur. How can we effectively incorporate 3D simulation into traditional PRA? Most PRA plant modeling is made up of components with different failure modes, probabilities, and rates. Typically, these components are grouped into various systems and then are modeled together (in different combinations) as a “system” with logic structures to form fault trees. Applicable fault trees are combined through scenarios, typically represented by event tree models. Though this method gives us failure results for a given model, it has limitations when it comes to time-based dependencies or dependencies that are coupled to physical processes which may themselves be space- or time-dependent. Since failures from a 3D simulation are naturally time-related, they should be used in that manner. In our simulation approach, traditional static models are converted into an equivalent state diagram representation with start states, probabilistically driven movements between states, and terminal states. As the state model is run repeatedly, it converges to the same results as the PRA model in cases where time-related factors are not important. In cases where timing considerations are important (e.g., when events are dependent upon each other), then the simulation approach will typically
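
    A minimal sketch of the state-based simulation idea described above (not the authors' implementation): components move between working and failed states with per-step probabilities, and repeated runs give a time-aware failure estimate that, when timing does not matter, converges toward what a static model would produce. All rates, durations and component names here are hypothetical:

        import random

        # Hypothetical two-pump system: the scenario fails if both pumps are
        # down at the same time at any point during a 1000-hour mission.
        FAIL_PER_HOUR = 1e-3
        REPAIR_PER_HOUR = 1e-2
        MISSION_HOURS = 1000

        def one_run():
            up = [True, True]
            for _ in range(MISSION_HOURS):
                for i in range(2):
                    if up[i]:
                        up[i] = random.random() >= FAIL_PER_HOUR
                    elif random.random() < REPAIR_PER_HOUR:
                        up[i] = True
                if not any(up):        # terminal state: simultaneous failure
                    return True
            return False

        runs = 20_000
        print("estimated failure probability:", sum(one_run() for _ in range(runs)) / runs)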

  1. Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces

    ERIC Educational Resources Information Center

    Hassell, Martin D.; Goyal, Sandeep; Limayem, Moez; Boughzala, Imed

    2012-01-01

    The level of satisfaction and effectiveness of 3D virtual learning environments were examined. Additionally, 3D virtual learning environments were compared with face-to-face learning environments. Students that experienced higher levels of flow and presence also experienced more satisfaction but not necessarily more effectiveness with 3D virtual…

  2. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  3. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  4. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  5. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  6. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography is a further development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver; the results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing. In more than 75% of the cases examined, we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Large differences were found between the volumes estimated with the three different techniques for the liver findings. 3D ultrasound is a valuable method for judging the morphological appearance of abdominal findings, and the possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible.

  7. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  8. Bringing 3D Printing to Geophysical Science Education

    NASA Astrophysics Data System (ADS)

    Boghosian, A.; Turrin, M.; Porter, D. F.

    2014-12-01

    3D printing technology has been embraced by many technical fields, and is rapidly making its way into people's homes and schools. While there is a growing educational and hobbyist community engaged in the STEM-focused technical and intellectual challenges associated with 3D printing, there is unrealized potential for the earth science community to use 3D printing to communicate scientific research to the public. Moreover, 3D printing offers scientists the opportunity to connect students and the public with novel visualizations of real data. As opposed to introducing terrestrial measurements through the use of colormaps and gradients, scientists can represent 3D concepts with 3D models, offering a more intuitive education tool. Furthermore, the tactile aspect of models makes geophysical concepts accessible to a wide range of learning styles, such as kinesthetic or tactile, and to learners including both visually impaired and color-blind students. We present a workflow whereby scientists, students, and the general public will be able to 3D print their own versions of geophysical datasets, even adding time through layering to include a 4th dimension, for a "4D" print. This will enable scientists with unique and expert insights into the data to easily create the tools they need to communicate their research. It will allow educators to quickly produce teaching aids for their students. Most importantly, it will enable the students themselves to translate the 2D representation of geophysical data into a 3D representation of that same data, reinforcing spatial reasoning.
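
    One concrete form the workflow above can take is converting a gridded geophysical field (elevation, ice thickness, and so on) into a printable mesh. The sketch below triangulates a small synthetic height grid and writes an ASCII STL surface; it illustrates the general idea rather than the authors' pipeline, and a real print would additionally need walls and a base to make the mesh watertight:

        import numpy as np

        def heightmap_to_stl(z, path, xy_scale=1.0, z_scale=1.0):
            # Triangulate a 2D height grid (two triangles per cell) into an ASCII STL.
            # Zero normals are written; most slicers recompute them from the vertices.
            ny, nx = z.shape
            with open(path, "w") as f:
                f.write("solid heightmap\n")
                for j in range(ny - 1):
                    for i in range(nx - 1):
                        corners = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
                        pts = [(x * xy_scale, y * xy_scale, z[y, x] * z_scale)
                               for x, y in corners]
                        for tri in ((pts[0], pts[1], pts[2]), (pts[0], pts[2], pts[3])):
                            f.write("  facet normal 0 0 0\n    outer loop\n")
                            for p in tri:
                                f.write(f"      vertex {p[0]} {p[1]} {p[2]}\n")
                            f.write("    endloop\n  endfacet\n")
                f.write("endsolid heightmap\n")

        # Synthetic "terrain": a smooth bump on a 50 x 50 grid
        yy, xx = np.mgrid[0:50, 0:50]
        heights = 5.0 * np.exp(-((xx - 25) ** 2 + (yy - 25) ** 2) / 200.0)
        heightmap_to_stl(heights, "terrain.stl")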

  9. Towards a Normalised 3D Geovisualisation: The Viewpoint Management

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Poux, F.; Hallot, P.; Billen, R.

    2016-10-01

    This paper deals with viewpoint management in 3D environments, considering an allocentric environment. Recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research on the analysis of visual variables used in 3D environments, a real standardisation of 3D representation rules is still lacking. In this paper we study the "viewpoint" as the first parameter considered for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction is not fixed in a top-down direction in 3D. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this observation, we propose a model, built on the analysis of the display pixels, that determines a viewpoint maximising the relayed information for a given kind of query. We developed an OpenGL prototype working on screen pixels that determines the optimal camera location based on a screen-pixel colour algorithm. This viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
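
    The screen-pixel idea can be caricatured without a full OpenGL renderer: project the scene from several candidate viewpoints onto a small pixel grid and keep the viewpoint that covers the most pixels, a crude stand-in for "relayed information". Everything in this sketch (the random point cloud, the orthographic projection, the scoring rule) is a simplification for illustration, not the authors' algorithm:

        import numpy as np

        def covered_pixels(points, yaw, pitch, grid=64):
            # Rotate the cloud, project it orthographically, count occupied screen pixels.
            cy, sy = np.cos(yaw), np.sin(yaw)
            cp, sp = np.cos(pitch), np.sin(pitch)
            r_yaw = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
            p = points @ (r_pitch @ r_yaw).T
            xy = p[:, :2]
            xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-9) * (grid - 1)
            return len(set(map(tuple, xy.astype(int))))

        rng = np.random.default_rng(0)
        cloud = rng.normal(size=(5000, 3)) * [3.0, 1.0, 0.5]   # stand-in scene

        candidates = [(y, p) for y in np.linspace(0, np.pi, 8)
                      for p in np.linspace(0, np.pi / 2, 4)]
        best = max(candidates, key=lambda c: covered_pixels(cloud, *c))
        print("best (yaw, pitch) in radians:", best)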

  10. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or generating animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful for showing a more overall representation of the results, whereas the traditional slices show a more local view. Presenting both 2D and 3D representations of MR images together can provide a more comprehensive view of the study’s findings. PMID:26594340
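
    As a hedged illustration of one common way to obtain such a perspective rendering (not the specific pipeline the author describes), the sketch below loads a statistical map stored as NIfTI, extracts an isosurface with marching cubes, and draws it as a translucent mesh. The filename and threshold are placeholders:

        import nibabel as nib
        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d.art3d import Poly3DCollection
        from skimage import measure

        vol = nib.load("stat_map.nii.gz").get_fdata()          # hypothetical input file
        verts, faces, _, _ = measure.marching_cubes(vol, level=2.3)

        fig = plt.figure()
        ax = fig.add_subplot(111, projection="3d")
        ax.add_collection3d(Poly3DCollection(verts[faces], alpha=0.3))  # "glass" surface
        ax.set_xlim(0, vol.shape[0])
        ax.set_ylim(0, vol.shape[1])
        ax.set_zlim(0, vol.shape[2])
        plt.savefig("glass_brain.png", dpi=150)

    A full glass-brain figure would also overlay a cortical surface and repeat the render from a fixed camera for each statistical map; those steps are omitted in this sketch.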

  11. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  12. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  13. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  14. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  15. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.
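
    For orientation, a 3-D Gabor atom is simply a Gaussian envelope modulated by a complex exponential at a chosen spatiotemporal frequency. The sketch below builds one such kernel with arbitrary size and frequencies; the filter bank, tiling, and motion-signature derivation from the paper are not reproduced:

        import numpy as np

        def gabor_3d(size=16, sigma=3.0, freq=(0.2, 0.1, 0.05)):
            # Complex 3-D Gabor kernel: Gaussian envelope times a plane wave.
            # freq = (fx, fy, ft) in cycles per sample along x, y and time.
            r = np.arange(size) - size // 2
            x, y, t = np.meshgrid(r, r, r, indexing="ij")
            envelope = np.exp(-(x**2 + y**2 + t**2) / (2.0 * sigma**2))
            carrier = np.exp(2j * np.pi * (freq[0] * x + freq[1] * y + freq[2] * t))
            return envelope * carrier

        kernel = gabor_3d()
        # Filtering an image sequence with a bank of such kernels yields localized
        # spatiotemporal-frequency coefficients; uniform translation at velocity
        # (vx, vy) concentrates energy on the plane ft = -(vx * fx + vy * fy).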

  16. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  17. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
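
    A stripped-down version of the first stage, detecting and matching keypoints between the left and right frames, can be written with OpenCV; the residual vertical disparity of the matches is the quantity the calibration then tries to drive toward zero. This is an illustration of the general approach rather than the authors' algorithm, and the filenames are placeholders:

        import cv2
        import numpy as np

        # Placeholder input images for the left and right cameras
        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)
        kp_l, des_l = orb.detectAndCompute(left, None)
        kp_r, des_r = orb.detectAndCompute(right, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:200]

        # Vertical disparity of each surviving match; large values suggest the
        # optical axes have been displaced (drops, focusing, barrel movement, ...)
        dy = np.array([kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1] for m in matches])
        print("median vertical disparity (px):", np.median(dy))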

  18. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  19. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  20. Spatial representation in face drawing and block design by nine groups from hunter-gatherers to literates.

    PubMed

    Pontius, A A

    1997-12-01

    A rank-order correlation was performed for nine cultural groups ranging from preliterate hunter-gatherers to literate medium-city dwellers. Two spatial tests of intrapattern spatial relations were used, the Draw-A-Person-With-Face-In-Front test and the Kohs Block Design, a test of constructive praxia. In contrast to traditional "Western" evaluations, credit was given for the preservation of the essential intrapattern shapes even when the exact spatial relations among these shapes were incorrect. Such "errors" were labelled "neolithic face" patterns and "nonrandom errors," respectively. Analysis suggested that the neglected intrapattern (in contrast to interobject) spatial relational skills emerge with literacy but are not yet actualized in preliterates, whose survival requires a quick fight-or-flight response upon prompt, albeit gross, assessment of the salient shapes of prey or predators (human or animal). The positive Spearman rank-order correlation of absent or low literacy skills with the percent of "neolithic face" drawings was .95 and with the "nonrandom" block designs .67. Suggestions were developed for assessing certain unusual "ecological" present situations or certain brain dysfunctions.

  1. Emerging Applications of Bedside 3D Printing in Plastic Surgery.

    PubMed

    Chae, Michael P; Rozen, Warren M; McMenamin, Paul G; Findlay, Michael W; Spychal, Robert T; Hunter-Smith, David J

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction softwares and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  2. Emerging Applications of Bedside 3D Printing in Plastic Surgery

    PubMed Central

    Chae, Michael P.; Rozen, Warren M.; McMenamin, Paul G.; Findlay, Michael W.; Spychal, Robert T.; Hunter-Smith, David J.

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction softwares and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  3. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  4. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  5. Contacts de langues et representations (Language Contacts and Representations).

    ERIC Educational Resources Information Center

    Matthey, Marinette, Ed.

    1997-01-01

    Essays on language contact and the image of language, entirely in French, include: "Representations 'du' contexte et representations 'en' contexte? Eleves et enseignants face a l'apprentissage de la langue" ("Representations 'of' Context or Representations 'in' Context? Students and Teachers Facing Language Learning" (Laurent Gajo); "Le crepuscule…

  6. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective-pixel identification and implements the finite-sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
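
    To convey the flavor of such a decomposition, the numpy sketch below separates the two simplest components of a noise cube acquired under uniform irradiance: spatial fixed-pattern noise (the standard deviation of the temporal-average frame) and spatio-temporal random noise (the residual about that average). It uses synthetic data, covers only these two terms, and omits the confidence intervals and finite-sampling correction discussed in the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        frames, rows, cols = 200, 128, 128

        # Synthetic cube: a fixed spatial pattern plus fully random noise
        fixed_pattern = rng.normal(0.0, 2.0, size=(rows, cols))
        cube = fixed_pattern + rng.normal(0.0, 5.0, size=(frames, rows, cols))

        temporal_mean = cube.mean(axis=0)                 # average frame
        sigma_vh = temporal_mean.std()                    # spatial fixed-pattern noise
        sigma_tvh = (cube - temporal_mean).std()          # spatio-temporal random noise

        print(f"fixed pattern ~ {sigma_vh:.2f}, random ~ {sigma_tvh:.2f}, total ~ {cube.std():.2f}")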

  7. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  8. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  9. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  10. Video coding and transmission standards for 3D television — a survey

    NASA Astrophysics Data System (ADS)

    Buchowicz, A.

    2013-03-01

    The emerging 3D television systems require effective techniques for transmission and storage of data representing a 3-D scene. The 3-D scene representations based on multiple video sequences or multiple views plus depth maps are especially important since they can be processed with existing video technologies. A review of video coding and transmission techniques is presented in this paper.

  11. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that is related to 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.
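
    One compact way to see the coupled-dictionary idea (a sketch of the general technique, not the authors' implementation) is to learn a single dictionary over stacked 2-D/3-D landmark pairs and then, at test time, sparse-code only the 2-D half and transfer the codes to the 3-D half. The data, dimensions and scikit-learn calls below are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning, sparse_encode

      # Illustrative paired training data: 2-D landmarks (n x d2) coupled with
      # the corresponding 3-D landmarks (n x d3); real data would come from the
      # training set described in the paper.
      rng = np.random.default_rng(0)
      n, d2, d3 = 200, 2 * 68, 3 * 68
      X3d = rng.normal(size=(n, d3))
      X2d = X3d[:, :d2] + 0.05 * rng.normal(size=(n, d2))   # toy coupling

      # Learn one dictionary over the stacked pairs -> coupled sub-dictionaries.
      dl = DictionaryLearning(n_components=64, transform_algorithm="omp",
                              transform_n_nonzero_coefs=8, max_iter=20,
                              random_state=0)
      dl.fit(np.hstack([X2d, X3d]))
      D2d, D3d = dl.components_[:, :d2], dl.components_[:, d2:]

      # At test time, sparse-code the 2-D part only and transfer the codes.
      norms = np.linalg.norm(D2d, axis=1, keepdims=True)     # OMP expects unit atoms
      codes = sparse_encode(X2d[:5], D2d / norms, algorithm="omp", n_nonzero_coefs=8)
      X3d_hat = (codes / norms.T) @ D3d                      # predicted 3-D landmarks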

  12. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  13. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  14. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553
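
    The triangulation principle behind such scanners reduces, for a rectified camera/projector pair, to the familiar relation Z = f*b/d between disparity and depth; the focal length and baseline in the sketch below are illustrative values, not the calibration of the described system.

      def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.1):
          """Active-triangulation depth for a rectified camera/projector pair:
          Z = f * b / d. Focal length and baseline are illustrative values."""
          return focal_px * baseline_m / disparity_px

      # A projected stripe observed 280 pixels from its reference position:
      z = depth_from_disparity(280.0)   # 0.5 m with these illustrative numbers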

  15. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  16. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  17. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory or parabolized stability equations based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.
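
    For reference, the N-factor used in such transition correlations is the standard integrated amplification of the disturbance amplitude A along the marching path; this is the textbook definition, not a formula quoted from the paper:

      N(x) = \ln\frac{A(x)}{A_0} = \int_{x_0}^{x} -\alpha_i(\xi)\,\mathrm{d}\xi

    where -\alpha_i is the local spatial growth rate (the imaginary part of the streamwise wavenumber with its sign reversed) and x_0 is the neutral point at which amplification begins.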

  18. Thermal 3D modeling system based on 3-view geometry

    NASA Astrophysics Data System (ADS)

    Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-11-01

    In this paper, we propose a novel thermal three-dimensional (3D) modeling system that includes 3D shape, visual, and thermal infrared information and solves a registration problem among these three types of information. The proposed system consists of a projector, a visual camera, and a thermal camera (PVT). To generate 3D shape information, we use a structured light technique, which consists of a visual camera and a projector. A thermal camera is added to the structured light system in order to provide thermal information. To solve the correspondence problem between the three sensors, we use three-view geometry. Finally, we obtain registered PVT data, which includes visual, thermal, and 3D shape information. Among various potential applications such as industrial measurements, biological experiments, military usage, and so on, we have adapted the proposed method to biometrics, particularly for face recognition. With the proposed method, we obtain multi-modal 3D face data that includes not only textural information but also data regarding head pose, 3D shape, and thermal information. Experimental results show that the performance of the proposed face recognition system is not limited by head pose variation, which is a serious problem in face recognition.
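
    Once the three sensors are calibrated, assigning a temperature to each reconstructed 3D point reduces to projecting it into the thermal image with that camera's intrinsics and extrinsics; the sketch below shows only this projection step, with illustrative matrices, and does not reproduce the three-view calibration itself.

      import numpy as np

      def sample_thermal(points_3d, K, R, t, thermal_img):
          """Project 3D points (N x 3, structured-light frame) into the thermal
          camera with intrinsics K and extrinsics (R, t), then sample the image.
          Nearest-neighbour sampling; all matrices are illustrative."""
          cam = points_3d @ R.T + t                   # into thermal-camera coordinates
          uvw = cam @ K.T                             # apply the intrinsics
          u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
          v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
          h, w = thermal_img.shape
          ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (cam[:, 2] > 0)
          temps = np.full(len(points_3d), np.nan)
          temps[ok] = thermal_img[v[ok], u[ok]]
          return temps

      # Made-up calibration and a flat 300 K thermal image:
      K = np.array([[400.0, 0.0, 160.0], [0.0, 400.0, 120.0], [0.0, 0.0, 1.0]])
      pts = np.array([[0.10, 0.00, 1.0], [0.00, 0.05, 1.2]])
      print(sample_thermal(pts, K, np.eye(3), np.zeros(3), np.full((240, 320), 300.0)))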

  19. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  20. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  1. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that makes it possible to obtain a solid object from a 3D model created with 3D modelling software. The final product is obtained through an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the object is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, differing in the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small ESA space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  2. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  3. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  4. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  5. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia.

  6. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc.. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of and especially searching these 3D city models, will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
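
    The Hilbert encoding itself is fairly involved; as a simpler stand-in that illustrates the same clustering idea, the sketch below interleaves coordinate bits into a 3D Morton (Z-order) key, which can likewise serve as a one-dimensional index for 3D city objects (a Hilbert key preserves locality somewhat better, which is the motivation in the paper).

      def morton3d(x, y, z, bits=10):
          """Interleave the low `bits` bits of integer grid coordinates x, y, z
          into a single Z-order (Morton) key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      # Sort quantized building centroids by key so that spatially adjacent
      # objects tend to be stored adjacently in the database as well.
      centroids = [(512, 300, 12), (513, 301, 12), (10, 900, 40)]
      centroids.sort(key=lambda c: morton3d(*c))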

  7. A technique for 3-D robot vision for space applications

    NASA Technical Reports Server (NTRS)

    Markandey, V.; Tagare, H.; Defigueiredo, R. J. P.

    1987-01-01

    An extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using Moment Invariants as features of object representation is discussed. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  8. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using moment invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  9. Communicating Experience of 3D Space: Mathematical and Everyday Discourse

    ERIC Educational Resources Information Center

    Morgan, Candia; Alshwaikh, Jehad

    2012-01-01

    In this article we consider data arising from student-teacher-researcher interactions taking place in the context of an experimental teaching program making use of multiple modes of communication and representation to explore three-dimensional (3D) shape. As teachers/researchers attempted to support student use of a logo-like formal language for…

  10. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  11. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  12. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  13. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  14. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
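
    The split between geometric Map-Reduce processing and attribute queries can be illustrated with a toy reduction over per-building records; the schema and values below are invented for illustration and are not the CityGML data used in the experiments.

      from functools import reduce

      # Toy per-building records (the semantic side of the model).
      buildings = [
          {"district": "A", "height": 21.0, "footprint_m2": 450.0},
          {"district": "A", "height": 35.5, "footprint_m2": 610.0},
          {"district": "B", "height": 12.0, "footprint_m2": 300.0},
      ]

      # "Map" each record to (district, approximate volume), then "reduce" by key.
      mapped = ((b["district"], b["height"] * b["footprint_m2"]) for b in buildings)

      def add_to(acc, kv):
          key, value = kv
          acc[key] = acc.get(key, 0.0) + value
          return acc

      volume_per_district = reduce(add_to, mapped, {})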

  15. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  16. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system, it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from the paper form to digital data is a complex and difficult one, particularly owing to the different types of drawings, the forms of the displayed objects, and the errors and deviations from technical standards that are encountered. The algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part is described in this contribution. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing the rotational part was used for verification.

  17. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  18. Teaching 3-D Geometry--The Multi Representational Way

    ERIC Educational Resources Information Center

    Kalbitzer, Sonja; Loong, Esther

    2013-01-01

    Many students have difficulties in geometric and spatial thinking (see Pittalis & Christou, 2010). Students who are asked to construct models of geometric thought not previously learnt may be forced into rote learning and only gain temporary or superficial success (Van de Walle & Folk, 2008, p. 431). Therefore it is imperative for…

  19. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. It enables the logical semantic expression and model of a 3D city to be built quickly, solves the 3D spatial information representation problem of one location carrying multiple properties and one property belonging to multiple locations, designs the spatial object structures of point, line, polygon and body for a 3D city spatial database, and provides a new idea and method for 3D city GIS modeling and organization management.
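
    A minimal sketch of such an object-oriented organization (illustrative classes, not the authors' schema), in which point/line/polygon/body geometries each carry arbitrary semantic properties, so one location can hold several properties and one property can be attached to several locations:

      from dataclasses import dataclass, field
      from typing import Dict, List

      @dataclass
      class Point:
          x: float
          y: float
          z: float

      @dataclass
      class SpatialObject:
          """One city object: a geometry of a given type plus semantic properties."""
          geometry_type: str                     # "point", "line", "polygon" or "body"
          vertices: List[Point]
          properties: Dict[str, object] = field(default_factory=dict)

      building = SpatialObject(
          geometry_type="body",
          vertices=[Point(0, 0, 0), Point(10, 0, 0), Point(10, 8, 0), Point(0, 8, 0),
                    Point(0, 0, 25), Point(10, 0, 25), Point(10, 8, 25), Point(0, 8, 25)],
          properties={"name": "Office tower", "use": "commercial", "floors": 8},
      )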

  20. A Prototype Digital Library for 3D Collections: Tools To Capture, Model, Analyze, and Query Complex 3D Data.

    ERIC Educational Resources Information Center

    Rowe, Jeremy; Razdan, Anshuman

    The Partnership for Research in Spatial Modeling (PRISM) project at Arizona State University (ASU) developed modeling and analytic tools to respond to the limitations of two-dimensional (2D) data representations perceived by affiliated discipline scientists, and to take advantage of the enhanced capabilities of three-dimensional (3D) data that…

  1. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  2. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  3. Methodology of the determination of the uncertainties by using the biometric device the broadway 3D

    NASA Astrophysics Data System (ADS)

    Jasek, Roman; Talandova, Hana; Adamek, Milan

    2016-06-01

    Biometric identification by face is among the most widely used methods of biometric identification. Because it provides faster and more accurate identification, it has been adopted in the security domain. A 3D face reader from the manufacturer Broadway was used for the measurements. It is equipped with a 3D camera system that uses the structured-light scanning method and saves the template as a 3D model of the face. The obtained data were evaluated with the Turnstile Enrolment Application (TEA) software. The measurements were made with the Broadway 3D face reader. First, the person was scanned and stored in the database. Thereafter, the person was compared with the template stored in the database for each method. Finally, a measure of reliability was evaluated for the Broadway 3D face reader.
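
    The reliability of such a reader is typically summarized by false accept and false reject rates obtained by thresholding match scores; the sketch below illustrates that evaluation with invented scores, not with output from the TEA software.

      def far_frr(genuine_scores, impostor_scores, threshold):
          """False reject rate over genuine comparisons and false accept rate over
          impostor comparisons, for a similarity score accepted when >= threshold."""
          frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
          far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
          return far, frr

      genuine = [0.91, 0.87, 0.95, 0.78, 0.88]    # same person vs. stored template
      impostor = [0.41, 0.55, 0.62, 0.37, 0.49]   # different people vs. stored template
      print(far_frr(genuine, impostor, threshold=0.7))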

  4. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and some man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for generating virtual 3-D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images with close range photogrammetry, a DSM and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study of these, we give the conclusions of this study. At the end, we give the conclusions of this research paper, a short view for justification and analysis, and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3-D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3-D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3-D city models. Photo-realistic, scalable, geo-referenced virtual 3

  5. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  6. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one which could potentially initiate another new material age. However, when it comes to exploiting it to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate an additional stream to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for building up only minute thermal stress during the printing process.

  7. Forensic 3D Scene Reconstruction

    SciTech Connect

    Little, Charles Q.; Peters, Ralph R.; Rigdon, J. Brian; Small, Daniel E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  8. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drive up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  9. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  10. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
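
    For readers unfamiliar with the basic operation, a deliberately small 2D LIC sketch follows (fixed-step Euler streamline tracing with a box filter); production implementations, including the volume LIC discussed here, add fast convolution, better integrators and 3D textures.

      import numpy as np

      def lic_2d(vx, vy, noise, length=15, step=0.5):
          """Minimal 2D Line Integral Convolution: for each pixel, average a
          white-noise texture along the local streamline traced forward and
          backward with fixed-step Euler integration. Unoptimized, for clarity."""
          h, w = noise.shape
          mag = np.hypot(vx, vy) + 1e-12
          ux, uy = vx / mag, vy / mag                   # unit direction field
          out = np.zeros_like(noise)
          for y in range(h):
              for x in range(w):
                  total, count = 0.0, 0
                  for sign in (+1.0, -1.0):             # forward and backward
                      px, py = float(x), float(y)
                      for _ in range(length):
                          i, j = int(round(py)), int(round(px))
                          if not (0 <= i < h and 0 <= j < w):
                              break
                          total += noise[i, j]
                          count += 1
                          px += sign * step * ux[i, j]
                          py += sign * step * uy[i, j]
                  out[y, x] = total / max(count, 1)
          return out

      # Circular flow over a small grid as a quick test.
      yy, xx = np.mgrid[0:64, 0:64].astype(float)
      img = lic_2d(-(yy - 32), xx - 32, np.random.default_rng(0).random((64, 64)))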

  11. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
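
    The linear factorization step can be illustrated, under an orthographic-camera assumption, by the classic rank-3 factorization of the centered measurement matrix; the sketch below is a generic version of that step, not the system's actual code, and omits the metric upgrade and the non-linear bundle adjustment.

      import numpy as np

      def factorize_tracks(W):
          """Rank-3 factorization of a 2F x P matrix of tracked image points
          (orthographic assumption): W ~ M @ S with motion M (2F x 3) and shape
          S (3 x P), recovered up to an affine ambiguity. Centering each row
          removes the per-frame translation."""
          W0 = W - W.mean(axis=1, keepdims=True)      # subtract image centroids
          U, s, Vt = np.linalg.svd(W0, full_matrices=False)
          M = U[:, :3] * np.sqrt(s[:3])               # motion (cameras)
          S = np.sqrt(s[:3])[:, None] * Vt[:3]        # structure (3D points)
          return M, S

      # Synthetic check: three orthographic views of 20 random 3D points.
      rng = np.random.default_rng(0)
      S_true = rng.normal(size=(3, 20))
      W = rng.normal(size=(6, 3)) @ S_true            # two measurement rows per frame
      M, S = factorize_tracks(W + 0.01 * rng.normal(size=W.shape))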

  12. DYNA3D. Explicit 3-d Hydrodynamic FEM Program

    SciTech Connect

    Whirley, R.G.; Englemann, B.E.

    1993-11-30

    DYNA3D is an explicit, three-dimensional, finite element program for analyzing the large deformation dynamic response of inelastic solids and structures. DYNA3D contains 30 material models and 10 equations of state (EOS) to cover a wide range of material behavior. The material models implemented are: elastic, orthotropic elastic, kinematic/isotropic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, Blatz-Ko rubber, high explosive burn, hydrodynamic without deviatoric stresses, elastoplastic hydrodynamic, temperature-dependent elastoplastic, isotropic elastoplastic, isotropic elastoplastic with failure, soil and crushable foam with failure, Johnson/Cook plasticity model, pseudo TENSOR geological model, elastoplastic with fracture, power law isotropic plasticity, strain rate dependent plasticity, rigid, thermal orthotropic, composite damage model, thermal orthotropic with 12 curves, piecewise linear isotropic plasticity, inviscid two invariant geologic cap, orthotropic crushable model, Mooney-Rivlin rubber, resultant plasticity, closed form update shell plasticity, and Frazer-Nash rubber model. The hydrodynamic material models determine only the deviatoric stresses. Pressure is determined by one of 10 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, tabulated, and TENSOR pore collapse. DYNA3D generates three binary output databases. One contains information for complete states at infrequent intervals; 50 to 100 states is typical. The second contains information for a subset of nodes and elements at frequent intervals; 1,000 to 10,000 states is typical. The last contains interface data for contact surfaces.
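
    As a reminder of how one of the simpler entries in that list is evaluated, the linear polynomial equation of state gives pressure from the compression mu = rho/rho0 - 1 and the internal energy E per unit reference volume; the coefficients in the sketch below are placeholders, not DYNA3D input, and the tension convention shown is one common choice.

      def linear_polynomial_eos(mu, E, C):
          """p = C0 + C1*mu + C2*mu**2 + C3*mu**3 + (C4 + C5*mu + C6*mu**2) * E,
          with mu = rho/rho0 - 1. Coefficients C0..C6 are placeholders."""
          C0, C1, C2, C3, C4, C5, C6 = C
          if mu < 0:            # one common convention: drop the quadratic terms in tension
              C2 = C6 = 0.0
          return C0 + C1*mu + C2*mu**2 + C3*mu**3 + (C4 + C5*mu + C6*mu**2) * E

      # Placeholder coefficients and state, in consistent units:
      p = linear_polynomial_eos(mu=0.05, E=2.0e6, C=(0.0, 2.2e9, 0.0, 0.0, 0.5, 0.0, 0.0))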

  13. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
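
    The parameter-sweep step can be sketched as below; a plain Gaussian filter stands in for the bilateral, anisotropic-diffusion and non-local-means filters the software actually provides, and best_denoise_sigma and its arguments are hypothetical names.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def best_denoise_sigma(noisy, reference, sigmas):
        """Sweep filter strengths and keep the one with the lowest mean squared
        error against a noiseless reference volume (both 3-D numpy arrays)."""
        best_sigma, best_mse = None, np.inf
        for sigma in sigmas:
            denoised = gaussian_filter(noisy, sigma=sigma)
            mse = np.mean((denoised - reference) ** 2)
            if mse < best_mse:
                best_sigma, best_mse = sigma, mse
        return best_sigma, best_mse
    ```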

  14. A Modified Exoskeleton for 3D Shape Description and Recognition

    NASA Astrophysics Data System (ADS)

    Lipikorn, Rajalida; Shimizu, Akinobu; Hagihara, Yoshihiro; Kobatake, Hidefumi

    Three-dimensional (3D) shape representation is a powerful tool in object recognition, which is an essential process in an image processing and analysis system. The skeleton is one of the most widely used representations for object recognition; nevertheless, most skeletons obtained from conventional methods are susceptible to rotation and noise disturbances. In this paper, we present a new 3D object representation called the modified exoskeleton (mES), which preserves skeleton properties, including significant characteristics of an object that are meaningful for recognition, and is more stable and less susceptible to rotation and noise than the skeleton. A 3D shape recognition methodology that determines the similarity between an observed object and known objects in a database is then introduced. Through a number of experiments on 3D artificial objects and real volumetric lung tumors extracted from CT images, we verify that the proposed methodology based on the mES is a simple yet efficient method that is less sensitive to rotation and noise, and is independent of the orientation and size of the objects.

  15. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
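
    A minimal sketch of the voxel opacity filtering described above, assuming the seismic volume is a NumPy array of amplitudes; the colour ramp and the name apply_opacity_filter are illustrative choices rather than part of any particular visualization package.

    ```python
    import numpy as np

    def apply_opacity_filter(volume, vmin, vmax):
        """Return per-voxel RGBA values, making voxels outside [vmin, vmax]
        fully transparent so the viewer can peer into the data volume."""
        norm = np.clip((volume - vmin) / (vmax - vmin), 0.0, 1.0)
        rgba = np.zeros(volume.shape + (4,))
        rgba[..., 0] = norm                    # simple red ramp for high amplitudes
        rgba[..., 2] = 1.0 - norm              # blue for low amplitudes
        rgba[..., 3] = np.where((volume >= vmin) & (volume <= vmax), norm, 0.0)
        return rgba
    ```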

  16. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  17. 3D printed rapid disaster response

    NASA Astrophysics Data System (ADS)

    Lacaze, Alberto; Murphy, Karl; Mottern, Edward; Corley, Katrina; Chu, Kai-Dee

    2014-05-01

    Under the Department of Homeland Security-sponsored Sensor-smart Affordable Autonomous Robotic Platforms (SAARP) project, Robotic Research, LLC is developing an affordable and adaptable method to provide disaster response robots developed with 3D printer technology. The SAARP Store contains a library of robots, a developer storefront, and a user storefront. The SAARP Store allows the user to select, print, assemble, and operate the robot. In addition to the SAARP Store, two platforms are currently being developed. They use a set of common non-printed components that will allow the later design of other platforms that share non-printed components. During disasters, new challenges are faced that require customized tools or platforms. Instead of prebuilt and prepositioned supplies, a library of validated robots will be catalogued to satisfy various challenges at the scene. 3D printing components will allow these customized tools to be deployed in a fraction of the time that would normally be required. While the current system is focused on supporting disaster response personnel, this system will be expandable to a range of customers, including domestic law enforcement, the armed services, universities, and research facilities.

  18. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current tool set includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and the positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
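
    A ruler-style readout of the surface distance between two points of interest could, under simple assumptions, reduce to a great-circle computation; the sketch below uses the haversine formula with an assumed mean Mars radius and is not the mission software's actual implementation.

    ```python
    import math

    MARS_MEAN_RADIUS_M = 3_389_500  # assumed value, for illustration only

    def surface_distance(lat1, lon1, lat2, lon2, radius=MARS_MEAN_RADIUS_M):
        """Great-circle distance between two points of interest (degrees in, metres out)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * radius * math.asin(math.sqrt(a))
    ```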

  19. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  20. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  1. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  2. Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification.

    PubMed

    Kumar, Ajay; Kwong, Cyril

    2015-03-01

    Human identification using fingerprint impressions has been widely studied and employed for more than 2000 years. Despite new advancements in 3D imaging technologies, a widely accepted representation of 3D fingerprint features and a matching methodology have yet to emerge. This paper investigates 3D representation of the widely employed 2D minutiae features by recovering and incorporating (i) the minutiae height z and (ii) its 3D orientation φ, and illustrates an effective strategy for matching popular minutiae features extended into 3D space. One of the obstacles preventing emerging 3D fingerprint identification systems from replacing conventional 2D fingerprint systems lies in their bulk and high cost, which mainly result from the use of structured lighting systems or multiple cameras. This paper attempts to address such key limitations of current 3D fingerprint technologies by developing a single-camera-based 3D fingerprint identification system. We develop a generalized 3D minutiae matching model and recover extended 3D fingerprint features from the reconstructed 3D fingerprints. The 2D fingerprint images acquired for the 3D fingerprint reconstruction can themselves be employed for performance improvement, as illustrated in the work detailed in this paper. This paper also attempts to answer one of the most fundamental questions on the availability of inherent discriminable information from 3D fingerprints. The experimental results are presented on a database of 240 clients' 3D fingerprints, which is made publicly available to further research efforts in this area, and illustrate the discriminant power of 3D minutiae representation and matching to achieve performance improvement.
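
    For orientation only, the sketch below shows a naive greedy pairing of 3D minutiae tuples (x, y, z, θ, φ) under distance and angle tolerances. It is not the paper's generalized 3D minutiae matching model; the function name, tolerances and score are assumptions.

    ```python
    import numpy as np

    def match_minutiae_3d(probe, gallery, dist_tol=15.0, angle_tol=np.radians(20)):
        """Greedy one-to-one matching of 3-D minutiae.

        probe, gallery : (N, 5) arrays of (x, y, z, theta, phi) per minutia,
                         where z is the recovered height and phi the 3-D orientation.
        Returns the fraction of probe minutiae that found a partner.
        """
        used, matches = set(), 0
        for m in probe:
            for j, g in enumerate(gallery):
                if j in used or np.linalg.norm(m[:3] - g[:3]) > dist_tol:
                    continue
                dtheta = abs(np.angle(np.exp(1j * (m[3] - g[3]))))  # wrapped angle difference
                if dtheta < angle_tol and abs(m[4] - g[4]) < angle_tol:
                    used.add(j)
                    matches += 1
                    break
        return matches / max(len(probe), 1)
    ```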

  3. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
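
    The sketch below shows the general flavour of approximating a directed Hausdorff fraction with a quantised occupancy set and a small search window, so each point test is constant time; it is only loosely inspired by, and not equivalent to, the authors' window-based method.

    ```python
    import numpy as np
    from itertools import product

    def hausdorff_fraction(model, scene, tol):
        """Approximate directed Hausdorff fraction: the share of model points that
        lie within roughly `tol` of some scene point (both (N, 3) arrays)."""
        cell = tol                                             # bin size tied to the tolerance
        occupied = {tuple(np.floor(p / cell).astype(int)) for p in scene}
        offsets = list(product((-1, 0, 1), repeat=3))          # 3x3x3 window of neighbouring cells
        hits = 0
        for p in model:
            c = np.floor(p / cell).astype(int)
            if any((c[0] + dx, c[1] + dy, c[2] + dz) in occupied for dx, dy, dz in offsets):
                hits += 1
        return hits / len(model)
    ```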

  4. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real time, and add improvements such as high-resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  5. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages use different approaches and methods suited to image-based 3D city modeling. A literature study shows that, to date, no such complete comparative study is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparison is mainly based on the data acquisition methods, the data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers remarks on what can and cannot be done with each software package. In conclusion, each software package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  6. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allows local meteorological measurements made at the UAV and ground receivers to supplement the acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
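
    The RBF-weighted field idea can be sketched as a least-squares fit to scattered samples, as below; the full method inverts path-integrated travel-time delays rather than point samples, so this is only a simplified illustration with assumed function names.

    ```python
    import numpy as np

    def gaussian_rbf(r, eps=1.0):
        return np.exp(-(eps * r) ** 2)

    def fit_rbf_field(centers, sample_pts, sample_vals, eps=1.0):
        """Least-squares RBF weights so the weighted sum reproduces scattered
        measurements (e.g. temperatures at the UAV and ground receivers)."""
        d = np.linalg.norm(sample_pts[:, None, :] - centers[None, :, :], axis=2)
        weights, *_ = np.linalg.lstsq(gaussian_rbf(d, eps), sample_vals, rcond=None)
        return weights

    def eval_rbf_field(centers, weights, query_pts, eps=1.0):
        """Evaluate the reconstructed field at arbitrary 3-D query points."""
        d = np.linalg.norm(query_pts[:, None, :] - centers[None, :, :], axis=2)
        return gaussian_rbf(d, eps) @ weights
    ```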

  7. Gravitation in 3D Spacetime

    NASA Astrophysics Data System (ADS)

    Laubenstein, John; Cockream, Kandi

    2009-05-01

    3D spacetime was developed by the IWPD Scale Metrics (SM) team using a coordinate system that translates n dimensions to n-1. 4-vectors are expressed in 3D along with a scaling factor representing time. Time is not orthogonal to the three spatial dimensions, but rather in alignment with an object's axis-of-motion. We have defined this effect as the object's ``orientation'' (X). The SM orientation (X) is equivalent to the orientation of the 4-velocity vector positioned tangent to its worldline, where X-1=θ+1 and θ is the angle of the 4-vector relative to the axis-of-motion. Both 4-vectors and SM appear to represent valid conceptualizations of the relationship between space and time. Why entertain SM? Scale Metrics gravity is quantized and may suggest a path for the full unification of gravitation with quantum theory. SM has been tested against current observation and is in agreement with the age of the universe, suggests a physical relationship between dark energy and dark matter, is in agreement with the accelerating expansion rate of the universe, contributes to the understanding of the fine-structure constant and provides a physical explanation of relativistic effects.

  8. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  9. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as the angle of incidence, the distance between the device and the subject, and other environmental factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
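
    A minimal sketch of reliability-weighted fusion for a single surface point, assuming each new thermal reading arrives with a scalar confidence; the class name and update rule are illustrative, not the device's actual algorithm.

    ```python
    class VoxelTemperature:
        """Running confidence-weighted temperature estimate for one surface point."""

        def __init__(self):
            self.weight_sum = 0.0
            self.value = 0.0

        def update(self, measurement, confidence):
            # confidence might reflect angle of incidence, distance, sensor noise, ...
            self.weight_sum += confidence
            self.value += (confidence / self.weight_sum) * (measurement - self.value)
            return self.value
    ```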

  10. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  11. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature large-scale processing technology was developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, even as graphene is exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process. PMID:26153673

  12. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature large-scale processing technology was developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, even as graphene is exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process.

  13. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Entities in cities and urban areas, such as building structures, are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient for coping with the complexity of new developments in big cities. The emergence of 3D city models has boosted the efficiency of analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world have been generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various tools for analysis based on the 3D city models. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly tools for 3D modelling available in the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as these accurately represent the real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specifications as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently. CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models in five well-defined Levels of Detail (LoD), namely LoD0

  14. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost-prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution options, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  15. Emergence of 3D Printed Dosage Forms: Opportunities and Challenges.

    PubMed

    Alhnan, Mohamed A; Okwuosa, Tochukwu C; Sadia, Muzna; Wan, Ka-Wai; Ahmed, Waqar; Arafat, Basel

    2016-08-01

    The recent introduction of the first FDA-approved 3D-printed drug has fuelled interest in 3D printing technology, which is set to revolutionize healthcare. Since its initial use, this rapid prototyping (RP) technology has evolved to such an extent that it is currently being used in a wide range of applications including tissue engineering, dentistry, construction, automotive and aerospace. However, in the pharmaceutical industry this technology is still in its infancy and its potential is yet to be fully explored. This paper presents various 3D printing technologies such as stereolithographic, powder-based, selective laser sintering, fused deposition modelling and semi-solid extrusion 3D printing. It also provides a comprehensive review of previous attempts at using 3D printing technologies in the manufacturing of dosage forms, with a particular focus on oral tablets. Their advantages, particularly their adaptability in the pharmaceutical field, are highlighted: they enable the preparation of dosage forms with complex designs and geometries, multiple actives and tailored release profiles. An insight into the technical challenges facing the different 3D printing technologies, such as the formulation and processing parameters, is provided. Light is also shed on the different regulatory challenges that need to be overcome for 3D printing to fulfil its real potential in the pharmaceutical industry.

  16. Emergence of 3D Printed Dosage Forms: Opportunities and Challenges.

    PubMed

    Alhnan, Mohamed A; Okwuosa, Tochukwu C; Sadia, Muzna; Wan, Ka-Wai; Ahmed, Waqar; Arafat, Basel

    2016-08-01

    The recent introduction of the first FDA-approved 3D-printed drug has fuelled interest in 3D printing technology, which is set to revolutionize healthcare. Since its initial use, this rapid prototyping (RP) technology has evolved to such an extent that it is currently being used in a wide range of applications including tissue engineering, dentistry, construction, automotive and aerospace. However, in the pharmaceutical industry this technology is still in its infancy and its potential is yet to be fully explored. This paper presents various 3D printing technologies such as stereolithographic, powder-based, selective laser sintering, fused deposition modelling and semi-solid extrusion 3D printing. It also provides a comprehensive review of previous attempts at using 3D printing technologies in the manufacturing of dosage forms, with a particular focus on oral tablets. Their advantages, particularly their adaptability in the pharmaceutical field, are highlighted: they enable the preparation of dosage forms with complex designs and geometries, multiple actives and tailored release profiles. An insight into the technical challenges facing the different 3D printing technologies, such as the formulation and processing parameters, is provided. Light is also shed on the different regulatory challenges that need to be overcome for 3D printing to fulfil its real potential in the pharmaceutical industry. PMID:27194002

  17. 3D model of bow shocks

    NASA Astrophysics Data System (ADS)

    Gustafsson, M.; Ravkilde, T.; Kristensen, L. E.; Cabrit, S.; Field, D.; Pineau Des Forêts, G.

    2010-04-01

    Context. Shocks produced by outflows from young stars are often observed as bow-shaped structures in which the H2 line strength and morphology are characteristic of the physical and chemical environments and the velocity of the impact. Aims: We present a 3D model of interstellar bow shocks propagating in a homogeneous molecular medium with a uniform magnetic field. The model enables us to estimate the shock conditions in observed flows. As an example, we show how the model can reproduce rovibrational H2 observations of a bow shock in OMC1. Methods: The 3D model is constructed by associating a planar shock with every point on a 3D bow skeleton. The planar shocks are modelled with a highly sophisticated chemical reaction network that is essential for predicting accurate shock widths and line emissions. The shock conditions vary along the bow surface and determine the shock type, the local thickness, and brightness of the bow shell. The motion of the cooling gas parallel to the bow surface is also considered. The bow shock can move at an arbitrary inclination to the magnetic field and to the observer, and we model the projected morphology and radial velocity distribution in the plane-of-sky. Results: The morphology of a bow shock is highly dependent on the orientation of the magnetic field and the inclination of the flow. Bow shocks can appear in many different guises and do not necessarily show a characteristic bow shape. The ratio of the H2 v = 2-1 S(1) line to the v = 1-0 S(1) line is variable across the flow and the spatial offset between the peaks of the lines may be used to estimate the inclination of the flow. The radial velocity comes to a maximum behind the apparent apex of the bow shock when the flow is seen at an inclination different from face-on. Under certain circumstances the radial velocity of an expanding bow shock can show the same signatures as a rotating flow. In this case a velocity gradient perpendicular to the outflow direction is a projection

  18. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  19. 3D geophysical inversion for contact surfaces

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Farquharson, Colin

    2014-05-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure volumetric inversions (performed on meshes of space-filling cells) recover smooth models inconsistent with such interpretations. There are several approaches through which geophysical inversion can help recover models with the desired characteristics. Some authors have developed iterative strategies in which several volumetric inversions are performed with regularization parameters changing to achieve sharper interfaces at automatically determined locations. Another approach is to redesign the regularization to be consistent with the desired model characteristics, e.g. L1-like norms or compactness measures. A few researchers have taken approaches that limit the recovered values to lie within particular ranges, resulting in sharp discontinuities; these include binary inversion, level set methods and clustering strategies. In most of the approaches mentioned above, the model parameterization considers the physical properties in each of the many space-filling cells within the volume of interest. The exception is level set methods, in which a higher dimensional function is parameterized and the contact surface is determined from the zero-level of that function. However, even level set methods rely on an underlying volumetric mesh. We are researching a fundamentally different type of inversion that parameterizes the Earth in terms of the contact surfaces between rock units. 3D geological Earth models typically comprise wireframe surfaces of tessellated triangles or other polygonal planar facets. This wireframe representation allows for flexible and efficient generation of complicated geological structures. Therefore, a natural approach for representing a geophysical model in an inversion is to parameterize the wireframe contact surfaces as the coordinates of the nodes (facet vertices). The geological and

  20. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel is superelastic and exhibits high electrical conduction.

  1. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel is superelastic and exhibits high electrical conduction. PMID:26861680

  2. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  3. Planning 3-D collision-free paths using spheres

    NASA Technical Reports Server (NTRS)

    Bonner, Susan; Kelley, Robert B.

    1989-01-01

    A scheme for the representation of objects, the Successive Spherical Approximation (SSA), facilitates the rapid planning of collision-free paths in a 3-D, dynamic environment. The hierarchical nature of the SSA allows collision-free paths to be determined efficiently while still providing for the exact representation of dynamic objects. The concept of a freespace cell is introduced to allow human 3-D conceptual knowledge to be used in facilitating satisfying choices for paths. Collisions can be detected at a rate better than 1 second per environment object per path. This speed enables the path planning process to apply a hierarchy of rules to create a heuristically satisfying collision-free path.
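
    A coarse sphere-versus-sphere overlap test, of the kind an SSA-style hierarchy would refine level by level, might look like the sketch below; the flat (non-hierarchical) structure and the names are simplifications of the approach described above.

    ```python
    import numpy as np

    def spheres_collide(centers_a, radii_a, centers_b, radii_b):
        """Conservative collision test between two objects, each approximated by a
        set of bounding spheres (one level of a sphere hierarchy)."""
        for ca, ra in zip(centers_a, radii_a):
            for cb, rb in zip(centers_b, radii_b):
                if np.linalg.norm(np.asarray(ca) - np.asarray(cb)) <= ra + rb:
                    return True   # overlap at this level; descend to finer spheres if needed
        return False
    ```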

  4. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
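
    Functionality of this kind, a region's mean spectrum plus intensity statistics, can be sketched with NumPy as below, assuming the hyperspectral stack is a (rows, cols, bands) array; this is not ShowMe3D's source code.

    ```python
    import numpy as np

    def region_spectrum_stats(hcube, mask):
        """Mean fluorescence spectrum and intensity statistics for a selected region.

        hcube : (rows, cols, bands) hyperspectral image stack
        mask  : (rows, cols) boolean selection of pixels
        """
        pixels = hcube[mask]                  # (n_selected_pixels, bands)
        spectrum = pixels.mean(axis=0)        # average spectrum of the region
        intensity = pixels.sum(axis=1)        # total intensity per pixel
        return spectrum, intensity.mean(), intensity.var()
    ```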

  5. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 80's, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference (FD) scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
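
    For a feel of the time-stepping scheme (4th order in space, 2nd order in time), here is a 1-D scalar-wave analogue; the actual code solves the 3D elastic equations, so this is only a didactic sketch with assumed names.

    ```python
    import numpy as np

    def step_wave_1d(u_prev, u_curr, c, dx, dt):
        """One explicit time step of the 1-D scalar wave equation,
        4th order in space and 2nd order in time (interior points only)."""
        lap = np.zeros_like(u_curr)
        lap[2:-2] = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
                     + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx**2)
        return 2 * u_curr - u_prev + (c * dt)**2 * lap
    ```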

  6. Conducting Polymer 3D Microelectrodes

    PubMed Central

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi; Castillo-León, Jaime; Emnéus, Jenny; Svendsen, Winnie E.

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry and the presence of the conducting polymer film has shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements. PMID:22163508

  7. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  8. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision is a critical issue for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, besides others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verifications are presented and demonstrated by practical examples

  9. The 3D Elevation Program: summary for Alaska

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    Coordination by SDMI and AMEC avoids duplication of effort and ensures a unified approach to consistent, statewide data acquisition; the enhancement of existing data; and support for emerging applications. The 3D Elevation Program (3DEP) initiative, managed by the U.S. Geological Survey (USGS), responds to the growing need for high-quality topographic data and a wide range of other three-dimensional representations of the Nation’s natural and constructed features.

  10. Ultrasonic impact damage assessment in 3D woven composite materials

    NASA Astrophysics Data System (ADS)

    Mannai, E.; Lamboul, B.; Roche, J. M.

    2015-03-01

    An ultrasonic nondestructive methodology is proposed for the assessment of low velocity impact damage in a 3D woven composite material. The output data is intended for material scientists and numerical scientists to validate the damage tolerance performance of the manufactured materials and the reliability of damage modeling predictions. A depth-dependent threshold based on the reflectivity of flat bottom holes is applied to the ultrasonic data to remove the structural noise and isolate echoes of interest. The methodology was applied to a 3 mm thick 3D woven composite plate impacted with different energies. An artificial 3D representation of the detected echoes is proposed to enhance the spatial perception of the generated damage by the end user. The paper finally highlights some statistics made on the detected echoes to quantitatively assess the impact damage resistance of the tested specimens.
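
    The depth-dependent thresholding step might be sketched as below, with the threshold curve assumed to come from a flat-bottom-hole calibration (e.g. via interpolation); array shapes and names are assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def detect_damage_echoes(ascans, depths, threshold_curve):
        """Flag echoes whose amplitude exceeds a depth-dependent threshold.

        ascans          : (n_positions, n_samples) rectified A-scan amplitudes
        depths          : (n_samples,) depth of each time sample
        threshold_curve : callable mapping depth -> amplitude threshold,
                          e.g. lambda d: np.interp(d, fbh_depths, fbh_amplitudes)
        """
        thresholds = threshold_curve(depths)        # (n_samples,)
        return ascans > thresholds[None, :]         # boolean map of echoes of interest
    ```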

  11. 3D Printing and Digital Rock Physics for Geomaterials

    NASA Astrophysics Data System (ADS)

    Martinez, M. J.; Yoon, H.; Dewers, T. A.

    2015-12-01

    Imaging techniques for the analysis of porous structures have revolutionized our ability to quantitatively characterize geomaterials. Digital representations of rock from CT images and physics modeling based on these pore structures provide the opportunity to further advance our quantitative understanding of fluid flow, geomechanics, and geochemistry, and the emergence of coupled behaviors. Additive manufacturing, commonly known as 3D printing, has revolutionized production of custom parts with complex internal geometries. For the geosciences, recent advances in 3D printing technology may be co-opted to print reproducible porous structures derived from CT-imaging of actual rocks for experimental testing. The use of 3D printed microstructure allows us to surmount typical problems associated with sample-to-sample heterogeneity that plague rock physics testing and to test material response independent from pore-structure variability. Together, imaging, digital rocks and 3D printing potentially enables a new workflow for understanding coupled geophysical processes in a real, but well-defined setting circumventing typical issues associated with reproducibility, enabling full characterization and thus connection of physical phenomena to structure. In this talk we will discuss the possibilities that these technologies can bring to geosciences and present early experiences with coupled multiscale experimental and numerical analysis using 3D printed fractured rock specimens. In particular, we discuss the processes of selection and printing of transparent fractured specimens based on 3D reconstruction of micro-fractured rock to study fluid flow characterization and manipulation. Micro-particle image velocimetry is used to directly visualize 3D single and multiphase flow velocity in 3D fracture networks. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U

  12. 3D holographic portraits: presence and absence

    NASA Astrophysics Data System (ADS)

    Oliveria, Rosa M.; Bernardo, Luís Miguel

    2011-02-01

    Authors writing about the portrait insist on its capacity to extend the image of the portrayed model beyond absence and even death. The portrait has this ability and suggests immortality. The picture suspends time, making the absent present. The portrait has been, over time, one of the themes most used in art, so it is no wonder that in holography it is an important subject as well. The face is a body area of privileged communication and expression; it expresses emotions through looks, smiles, movements and expressions. Holography being, so far, the recording technology that represents an object most similarly to the original, with the same parallax, we may fall into a mimetic representation of reality. In art holography, even when following paths already traversed, the resulting holograms are always different because of the unique concept that each artist-holographer puts into his work. As with any other artistic technology, each artist uses the medium differently and with different results.

  13. Mapping the human cerebral cortex using 3-D medial manifolds

    NASA Astrophysics Data System (ADS)

    Szekely, Gabor; Brechbuehler, Christian; Kuebler, Olaf; Ogniewicz, Robert; Budinger, Thomas F.

    1992-09-01

    Novel imaging technologies provide a detailed look at the structure and function of the tremendously complex and variable human brain. Optimal exploitation of the information stored in the rapidly growing collection of acquired and segmented MRI data calls for robust and reliable descriptions of the individual geometry of the cerebral cortex. A mathematical description and representation of 3-D shape, capable of dealing with forms of variable appearance, is the focus of this paper. We base our development on the Medial Axis Transformation (MAT), customarily defined in 2-D, although the concept generalizes to any number of dimensions. Our implementation of the 3-D MAT combines a full 3-D Voronoi tessellation generated by the set of all border points with regularization procedures to obtain geometrically and topologically correct medial manifolds. The proposed algorithm was tested on synthetic objects and has been applied to 3-D MRI data of 1 mm isotropic resolution to obtain a description of the sulci in the cerebral cortex. Description and representation of the cortical anatomy is significant in clinical applications, medical research, and instrumentation developments.
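
    As a rough stand-in for the regularised Voronoi-based MAT, the sketch below marks voxels where the Euclidean distance transform is locally maximal; it omits the regularisation step the paper relies on and uses SciPy's distance transform rather than a Voronoi tessellation.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt, maximum_filter

    def approximate_medial_voxels(binary_volume):
        """Crude medial-surface estimate for a boolean 3-D object mask: keep voxels
        whose distance to the background is a local maximum. A real MAT pipeline
        would prune the many spurious branches this produces."""
        vol = np.asarray(binary_volume, dtype=bool)
        dist = distance_transform_edt(vol)
        local_max = maximum_filter(dist, size=3)
        return vol & (dist == local_max) & (dist > 0)
    ```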

  14. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  15. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, the (x, y, z) coordinates of each vertex are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with the texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
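
    As a hedged illustration of the connectivity step described above (the GM-Algorithm for the vertex coordinates and the authors' arithmetic coder are not reproduced), the sketch below turns triangle-face indices into small differences, which an entropy coder can then compress efficiently.

        import numpy as np

        def delta_encode_faces(faces):
            # faces: (F, 3) integer array of triangle vertex indices
            flat = faces.reshape(-1).astype(np.int64)
            deltas = np.empty_like(flat)
            deltas[0] = flat[0]
            deltas[1:] = np.diff(flat)            # differences between adjacent indices are small
            return deltas

        def delta_decode_faces(deltas, n_faces):
            return np.cumsum(deltas).reshape(n_faces, 3)

        faces = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
        deltas = delta_encode_faces(faces)
        assert np.array_equal(delta_decode_faces(deltas, len(faces)), faces)
        # small-magnitude deltas have low entropy, which is what makes arithmetic coding effective here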

  16. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  17. 3D View of Mars Particle

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point.

    The particle in the upper left corner shown at the highest magnification ever seen from another world is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil.

    The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer.

    The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  19. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g., laser power balance, pointing errors); and (3) synthetic diagnostics.

  20. 3D moviemap and a 3D panorama

    NASA Astrophysics Data System (ADS)

    Naimark, Michael

    1997-05-01

    Two immersive virtual environments produced as art installations investigate 'sense of place' in different but complementary ways. One is a stereoscopic moviemap, the other a stereoscopic panorama. Moviemaps are interactive systems which allow 'travel' along pre-recorded routes with some control over speed and direction. Panoramas are 360 degree visual representations dating back to the late 18th century which have recently experienced renewed interest due to 'virtual reality' systems. Moviemaps allow 'moving around' while panoramas allow 'looking around,' but to date there has been little or no attempt to produce either in stereo from camera-based material. 'See Banff' is a stereoscopic moviemap about landscape, tourism, and growth in the Canadian Rocky Mountains. It was filmed with twin 16 mm cameras and displayed as a single-user experience housed in a cabinet resembling a century-old kinetoscope, with a crank on the side for 'moving through' the material. 'Be Now Here (Welcome to the Neighborhood)' (1995-6) is a stereoscopic panorama filmed in public gathering places around the world, based upon the UNESCO World Heritage 'In Danger' list. It was filmed with twin 35 mm motion picture cameras on a rotating tripod and displayed using a synchronized rotating floor.

  1. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  2. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    In this article, a system used to reconstruct locomotive wheels is described, helping workers inspect the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. We use the 2D camera to capture the line-laser light reflected by the object, a wheel, and then compute the final coordinates of the structured light. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and illustrate the 3D view of the wheel. The article also proposes the system structure, processing steps and methods, and sets up an experimental platform to verify the design proposal. We verify the feasibility of the whole process and analyze the results by comparing them to standard data. The test results show that this system works well and has high accuracy in the reconstruction. Because no such application is yet in use in the railway industry, the system has practical value for railway inspection.
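
    A minimal sketch of the underlying line-laser triangulation, assuming a simple calibrated geometry (the parameter names and values are illustrative, not the authors' calibration): the lateral pixel displacement of the laser line encodes surface height.

        import numpy as np

        def height_from_displacement(dx_pixels, pixel_size_mm, laser_angle_deg, magnification):
            # dx_pixels: displacement of the detected laser line from its reference position, per image row
            dx_mm = dx_pixels * pixel_size_mm / magnification
            return dx_mm / np.tan(np.radians(laser_angle_deg))

        rows = np.arange(480)
        dx = 5.0 + 2.0 * np.sin(rows / 60.0)      # synthetic detected-line displacement in pixels
        profile = height_from_displacement(dx, pixel_size_mm=0.005, laser_angle_deg=30.0, magnification=0.5)
        print(profile[:3])                        # one height profile; stacking profiles as the wheel rotates yields the full 3D surface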

  3. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, often times the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  4. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  5. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  6. Trapezoidal phase-shifting method for 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Huang, Peisen S.; Zhang, Song; Chiang, Fu-Pen

    2004-12-01

    We propose a novel structured light method, namely the trapezoidal phase-shifting method, for 3-D shape measurement. This method uses three patterns coded with phase-shifted, trapezoidal-shaped gray levels. The 3-D information of the object is extracted by direct calculation of an intensity ratio. Theoretical analysis showed that this new method is significantly less sensitive to the defocusing effect of the captured images when compared to traditional intensity-ratio based methods. This important advantage makes large-depth 3-D shape measurement possible. Compared to the sinusoidal phase-shifting method, the resolution is similar, but the processing speed is at least 4.5 times faster. The feasibility of this method was demonstrated in a previously developed real-time 3-D shape measurement system. The reconstructed 3-D results showed quality similar to those obtained by the sinusoidal phase-shifting method. However, since the processing speed was much faster, we were able not only to acquire the images in real time, but also to reconstruct the 3-D shapes in real time (40 fps at a resolution of 532 x 500 pixels). This real-time capability allows us to measure dynamically changing objects, such as human faces. The potential applications of this new method include industrial inspection, reverse engineering, robotic vision, computer graphics, medical diagnosis, etc.
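
    A minimal per-pixel sketch of the intensity-ratio idea for three trapezoidal, phase-shifted fringe images; the mapping from (region, local ratio) to the final monotonic ramp, and the calibration to absolute depth, follow the paper and are not reproduced here.

        import numpy as np

        def trapezoidal_ratio(I1, I2, I3):
            I = np.stack([I1, I2, I3], axis=-1).astype(float)
            Imin = I.min(axis=-1)
            Imax = I.max(axis=-1)
            Imed = I.sum(axis=-1) - Imin - Imax
            r = (Imed - Imin) / np.maximum(Imax - Imin, 1e-9)           # local ratio in [0, 1]
            region = 3 * np.argmax(I, axis=-1) + np.argmin(I, axis=-1)  # 6 distinct codes per fringe period
            return r, region

        # synthetic 2x2 example
        I1 = np.array([[1.0, 0.2], [0.5, 0.9]])
        I2 = np.array([[0.5, 0.9], [1.0, 0.2]])
        I3 = np.array([[0.2, 0.5], [0.2, 0.5]])
        print(trapezoidal_ratio(I1, I2, I3))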

  7. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

    This paper addresses the issue of 3D mesh indexation by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is proposed here. Intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation and computational complexity) with a spatial alignment procedure which leads to geometrically invariant behavior. Experiments have been carried out on the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a categorized ground truth subset, are reported in terms of Bull's Eye Percentage (BEP) score and compared to those obtained by applying the MPEG-7 3D SD. It is shown that the O3DHTD outperforms the MPEG-7 3D SD by up to 28%.
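
    A rough sketch of a 3D Hough-style shape signature in the spirit of the descriptor above, under assumed binning and normalization (not the O3DHTD settings, and without the spatial alignment step the paper couples it with): each triangle votes, weighted by its area, for the plane it lies on.

        import numpy as np

        def hough_descriptor(vertices, faces, n_theta=8, n_phi=16, n_rho=8):
            v = vertices[faces]                                   # (F, 3, 3) triangle corner coordinates
            centroids = v.mean(axis=1)
            normals = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
            areas = 0.5 * np.linalg.norm(normals, axis=1)
            n = normals / np.maximum(np.linalg.norm(normals, axis=1, keepdims=True), 1e-12)
            n = np.where(n[:, 2:3] < 0.0, -n, n)                  # plane orientation is signless
            rho = np.abs(np.einsum('ij,ij->i', centroids, n))     # plane distance from the origin
            theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))
            phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)
            H, _ = np.histogramdd(np.stack([theta, phi, rho], axis=1),
                                  bins=(n_theta, n_phi, n_rho),
                                  range=((0, np.pi / 2 + 1e-9), (0, 2 * np.pi), (0, rho.max() + 1e-9)),
                                  weights=areas)
            return (H / max(H.sum(), 1e-12)).ravel()              # area-normalized plane histogram

        # toy mesh: a single tetrahedron
        V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
        F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
        print(hough_descriptor(V, F).shape)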

  8. Forward ramp in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mars Pathfinder's forward rover ramp can be seen successfully unfurled in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This ramp was not used for the deployment of the microrover Sojourner, which occurred at the end of Sol 2. When this image was taken, Sojourner was still latched to one of the lander's petals, waiting for the command sequence that would execute its descent off of the lander's petal.

    The image helped Pathfinder scientists determine whether to deploy the rover using the forward or backward ramps and the nature of the first rover traverse. The metallic object at the lower left of the image is the lander's low-gain antenna. The square at the end of the ramp is one of the spacecraft's magnetic targets. Dust that accumulates on the magnetic targets will later be examined by Sojourner's Alpha Proton X-Ray Spectrometer instrument for chemical analysis. At right, a lander petal is visible.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  9. 3D grain boundary migration

    NASA Astrophysics Data System (ADS)

    Becker, J. K.; Bons, P. D.

    2009-04-01

    Microstructures of rocks play an important role in determining rheological properties and help to reveal the processes that lead to their formation. Some of these processes change the microstructure significantly and may thus have the opposite effect of obliterating any fabrics indicative of the previous history of the rocks. One of these processes is grain boundary migration (GBM). During static recrystallisation, GBM may produce a foam texture that completely overprints a pre-existing grain boundary network, and GBM actively influences the rheology of a rock via its influence on grain size and lattice defect concentration. We present here new numerical simulation software that is capable of simulating a whole range of processes on the grain scale (it is not limited to grain boundary migration). The software is polyhedron-based, meaning that each grain (or phase) is represented by a polyhedron that has discrete boundaries. The boundary (the shell) of the polyhedron is defined by a set of facets, which in turn are defined by sets of vertices. Each structural entity (polyhedron, facet and vertex) can have an unlimited number of parameters (depending on the process to be modeled), such as surface energy, concentration, etc., which can be used to calculate changes of the microstructure. We use the process of grain boundary migration in a "regular" and a partially molten rock to demonstrate the software. Since this software is 3D, the formation of melt networks in a partially molten rock can also be studied. The interconnected melt network is of fundamental importance for melt segregation and migration in the crust and mantle and can help to understand the core-mantle differentiation of large terrestrial planets.
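
    A minimal data-structure sketch of the polyhedron/facet/vertex hierarchy described above, with an open-ended parameter dictionary on every entity (the names and the toy driving-force rule are illustrative, not the actual software's API).

        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class Vertex:
            position: Tuple[float, float, float]
            params: Dict[str, float] = field(default_factory=dict)

        @dataclass
        class Facet:
            vertices: List[Vertex]
            params: Dict[str, float] = field(default_factory=dict)

        @dataclass
        class Polyhedron:                       # one grain (or phase)
            facets: List[Facet]
            params: Dict[str, float] = field(default_factory=dict)

        def migration_velocity(facet: Facet, mobility: float) -> float:
            # toy rule: boundary velocity proportional to the facet's stored surface energy
            return mobility * facet.params.get("surface_energy", 0.0)

        corners = [Vertex((0, 0, 0)), Vertex((1, 0, 0)), Vertex((0, 1, 0))]
        facet = Facet(corners, params={"surface_energy": 0.5})
        grain = Polyhedron([facet], params={"phase": 1.0})
        print(migration_velocity(facet, mobility=2.0))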

  10. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  11. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  12. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the most suitable 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  13. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology.

  14. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  15. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  16. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  17. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  18. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
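
    A tiny 2D illustration of the texture-advection idea (the paper's contribution is extending it to 3D and 4D textures for 3D flow fields): texture coordinates are traced backwards along the velocity field each frame, so the texture appears to move with the flow. The field and step size below are made up.

        import numpy as np

        def advect_texture(tex, vx, vy, dt=1.0):
            # tex: (H, W) texture; vx, vy: velocity components sampled on the same grid
            h, w = tex.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            # backward (semi-Lagrangian) trace: where did the value at (x, y) come from one step ago?
            src_x = np.clip(np.round(xs - dt * vx).astype(int), 0, w - 1)
            src_y = np.clip(np.round(ys - dt * vy).astype(int), 0, h - 1)
            return tex[src_y, src_x]

        tex = np.random.rand(64, 64)
        vx = 1.5 * np.ones((64, 64))            # uniform flow to the right
        vy = np.zeros((64, 64))
        frame = advect_texture(tex, vx, vy)     # calling this repeatedly animates the flow
        print(frame.shape)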

  19. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Abolfazl Mostafavia, Mir; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3×3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.
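
    As a toy 2D illustration of the 9-intersection idea that the extended model builds on (the planar regions embedded in R3 and the extra connection-detail element are not reproduced), shapely's relate() returns the dimensionally extended 9-intersection (DE-9IM) matrix for two regions.

        from shapely.geometry import Polygon

        a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
        b = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])   # shares an edge with a -> "meet"
        c = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])   # overlaps a -> "intersect"

        # each character gives the dimension (F, 0, 1, 2) of one interior/boundary/exterior intersection
        print(a.relate(b))   # 'FF2F11212': interiors disjoint, boundaries share a 1-D piece
        print(a.relate(c))   # '212101212': interiors share a 2-D piece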

  20. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada, Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky) and U.S. Geological Survey digital aerial photography provides the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  1. Developing 3D SEM in a broad biological context

    PubMed Central

    Kremer, A; Lippens, S; Bartunkova, S; Asselbergh, B; Blanpain, C; Fendrych, M; Goossens, A; Holt, M; Janssens, S; Krols, M; Larsimont, J-C; Mc Guire, C; Nowack, MK; Saelens, X; Schertel, A; Schepens, B; Slezak, M; Timmerman, V; Theunis, C; Van Brempt, R; Visser, Y; GuÉRin, CJ

    2015-01-01

    When electron microscopy (EM) was introduced in the 1930s it gave scientists their first look into the nanoworld of cells. Over the last 80 years EM has vastly increased our understanding of the complex cellular structures that underlie the diverse functions that cells need to maintain life. One drawback that has been difficult to overcome was the inherent lack of volume information, mainly due to the limit on the thickness of sections that could be viewed in a transmission electron microscope (TEM). For many years scientists struggled to achieve three-dimensional (3D) EM using serial section reconstructions, TEM tomography, and scanning EM (SEM) techniques such as freeze-fracture. Although each technique yielded some special information, they required a significant amount of time and specialist expertise to obtain even a very small 3D EM dataset. Almost 20 years ago scientists began to exploit SEMs to image blocks of embedded tissues and perform serial sectioning of these tissues inside the SEM chamber. Using first focused ion beams (FIB) and subsequently robotic ultramicrotomes (serial block-face, SBF-SEM) microscopists were able to collect large volumes of 3D EM information at resolutions that could address many important biological questions, and do so in an efficient manner. We present here some examples of 3D EM taken from the many diverse specimens that have been imaged in our core facility. We propose that the next major step forward will be to efficiently correlate functional information obtained using light microscopy (LM) with 3D EM datasets to more completely investigate the important links between cell structures and their functions. Lay Description Life happens in three dimensions. For many years, first light, and then EM struggled to image the smallest parts of cells in 3D. With recent advances in technology and corresponding improvements in computing, scientists can now see the 3D world of the cell at the nanoscale. In this paper we present the

  2. 3D toroidal physics: testing the boundaries of symmetry breaking

    NASA Astrophysics Data System (ADS)

    Spong, Don

    2014-10-01

    Toroidal symmetry is an important concept for plasma confinement; it allows the existence of nested flux surface MHD equilibria and conserved invariants for particle motion. However, perfect symmetry is unachievable in realistic toroidal plasma devices. For example, tokamaks have toroidal ripple due to discrete field coils, optimized stellarators do not achieve exact quasi-symmetry, the plasma itself continually seeks lower energy states through helical 3D deformations, and reactors will likely have non-uniform distributions of ferritic steel near the plasma. Also, some level of designed-in 3D magnetic field structure is now anticipated for most concepts in order to lead to a stable, steady-state fusion reactor. Such planned 3D field structures can take many forms, ranging from tokamaks with weak 3D ELM-suppression fields to stellarators with more dominant 3D field structures. There is considerable interest in the development of unified physics models for the full range of 3D effects. Ultimately, the questions of how much symmetry breaking can be tolerated and how to optimize its design must be addressed for all fusion concepts. Fortunately, significant progress is underway in theory, computation and plasma diagnostics on many issues such as magnetic surface quality, plasma screening vs. amplification of 3D perturbations, 3D transport, influence on edge pedestal structures, MHD stability effects, modification of fast ion-driven instabilities, prediction of energetic particle heat loads on plasma-facing materials, effects of 3D fields on turbulence, and magnetic coil design. A closely coupled program of simulation, experimental validation, and design optimization is required to determine what forms and amplitudes of 3D shaping and symmetry breaking will be compatible with future fusion reactors. The development of models to address 3D physics and progress in these areas will be described. This work is supported both by the US Department of Energy under Contract DE

  3. Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  4. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  5. RELAP5-3D User Problems

    SciTech Connect

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics and 3D neutron kinetics. Assessment, verification, and validation of the 3D capability in RELAP5-3D is discussed in the literature. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users' seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  6. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  7. Customised 3D Printing: An Innovative Training Tool for the Next Generation of Orbital Surgeons.

    PubMed

    Scawn, Richard L; Foster, Alex; Lee, Bradford W; Kikkawa, Don O; Korn, Bobby S

    2015-01-01

    Additive manufacturing or 3D printing is the process by which three dimensional data fields are translated into real-life physical representations. 3D printers create physical printouts using heated plastics in a layered fashion resulting in a three-dimensional object. We present a technique for creating customised, inexpensive 3D orbit models for use in orbital surgical training using 3D printing technology. These models allow trainee surgeons to perform 'wet-lab' orbital decompressions and simulate upcoming surgeries on orbital models that replicate a patient's bony anatomy. We believe this represents an innovative training tool for the next generation of orbital surgeons.

  8. FlexyDos3D: a deformable anthropomorphic 3D radiation dosimeter: radiation properties

    NASA Astrophysics Data System (ADS)

    De Deene, Y.; Skyt, P. S.; Hil, R.; Booth, J. T.

    2015-02-01

    Three dimensional radiation dosimetry has received growing interest with the implementation of highly conformal radiotherapy treatments. The radiotherapy community faces new challenges with the commissioning of image guided and image gated radiotherapy treatments (IGRT) and deformable image registration software. A new three dimensional anthropomorphically shaped flexible dosimeter, further called ‘FlexyDos3D’, has been constructed and a new fast optical scanning method has been implemented that enables scanning of irregular shaped dosimeters. The FlexyDos3D phantom can be actuated and deformed during the actual treatment. FlexyDos3D offers the additional advantage that it is easy to fabricate, is non-toxic and can be molded in an arbitrary shape with high geometrical precision. The dosimeter formulation has been optimized in terms of dose sensitivity. The influence of the casting material and oxygen concentration has also been investigated. The radiophysical properties of this new dosimeter are discussed including stability, spatial integrity, temperature dependence of the dosimeter during radiation, readout and storage, dose rate dependence and tissue equivalence. The first authors Y De Deene and P S Skyt made an equivalent contribution to the experimental work presented in this paper.

  9. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
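
    A minimal sketch of the fusion step, assuming a made-up rigid landmark model, camera intrinsics and noise levels: an Extended Kalman Filter whose state is the head pose (axis-angle rotation plus translation) and whose measurement is the 2D projection of the 3D landmarks. The paper's local animation states (eyelid, mouth) and the shape-regression front end are omitted.

        import numpy as np
        from scipy.spatial.transform import Rotation

        FOCAL, CX, CY = 800.0, 320.0, 240.0        # assumed pinhole intrinsics

        def project(pose, model_pts):
            # pose = [rx, ry, rz, tx, ty, tz]; model_pts: (N, 3). Returns a (2N,) pixel vector.
            R = Rotation.from_rotvec(pose[:3]).as_matrix()
            cam = model_pts @ R.T + pose[3:]
            uv = np.column_stack([FOCAL * cam[:, 0] / cam[:, 2] + CX,
                                  FOCAL * cam[:, 1] / cam[:, 2] + CY])
            return uv.ravel()

        def ekf_update(pose, P, z, model_pts, meas_var=4.0, eps=1e-5):
            # one EKF measurement update; z is the (2N,) vector of tracked 2D landmarks
            h0 = project(pose, model_pts)
            H = np.zeros((h0.size, 6))             # numerical Jacobian dh/dpose
            for i in range(6):
                dp = np.zeros(6); dp[i] = eps
                H[:, i] = (project(pose + dp, model_pts) - h0) / eps
            S = H @ P @ H.T + meas_var * np.eye(h0.size)
            K = P @ H.T @ np.linalg.inv(S)
            return pose + K @ (z - h0), (np.eye(6) - K @ H) @ P

        # toy run: a 5-point rigid "face" (mm) and one noisy observation of its projection
        model = np.array([[0, 0, 0], [-30, -30, -30], [30, -30, -30],
                          [-20, 30, -30], [20, 30, -30]], float)
        true_pose = np.array([0.05, -0.1, 0.02, 0.0, 0.0, 600.0])
        z = project(true_pose, model) + np.random.normal(0, 1, 10)
        pose, P = np.zeros(6), np.eye(6) * 1e2
        pose[5] = 500.0                            # rough initial depth guess
        for _ in range(20):                        # iterate the update to converge on the pose
            pose, P = ekf_update(pose, P, z, model)
        print(pose.round(2))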

  10. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  11. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  12. Optical fabrication of lightweighted 3D printed mirrors

    NASA Astrophysics Data System (ADS)

    Herzog, Harrison; Segal, Jacob; Smith, Jeremy; Bates, Richard; Calis, Jacob; De La Torre, Alyssa; Kim, Dae Wook; Mici, Joni; Mireles, Jorge; Stubbs, David M.; Wicker, Ryan

    2015-09-01

    Direct Metal Laser Sintering (DMLS) and Electron Beam Melting (EBM) 3D printing technologies were utilized to create lightweight, optical grade mirrors out of AlSi10Mg aluminum and Ti6Al4V titanium alloys at the University of Arizona in Tucson. The mirror prototypes were polished to meet the λ/20 RMS and λ/4 P-V surface figure requirements. The intent of this project was to design topologically optimized mirrors that had a high specific stiffness and low surface displacement. Two models were designed using Altair Inspire software, and the mirrors had to endure the polishing process with the necessary stiffness to eliminate print-through. Mitigating porosity of the 3D printed mirror blanks was a challenge in the face of reconciling new printing technologies with traditional optical polishing methods. The prototypes underwent Hot Isostatic Press (HIP) and heat treatment to improve density, eliminate porosity, and relieve internal stresses. Metal 3D printing allows for nearly unlimited topological constraints on design and virtually eliminates the need for a machine shop when creating an optical quality mirror. This research can lead to an increase in mirror mounting support complexity in the manufacturing of lightweight mirrors and improve overall process efficiency. The project aspired to have many future applications of light weighted 3D printed mirrors, such as spaceflight. This paper covers the design/fab/polish/test of 3D printed mirrors, thermal/structural finite element analysis, and results.

  13. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
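
    A hedged sketch of the DIBR alternative described above (the constants and z-buffer convention are assumptions, and real drivers handle occlusion and hole filling far more carefully): each pixel is shifted horizontally by a disparity derived from the depth buffer to synthesize the second eye's view.

        import numpy as np

        def render_right_view(left_img, depth, focal_px=1000.0, baseline=0.06, z_near=0.1, z_far=100.0):
            # left_img: (H, W, 3); depth: (H, W) z-buffer values in [0, 1]
            z = z_near * z_far / (z_far - depth * (z_far - z_near))   # unproject a typical z-buffer
            disparity = (focal_px * baseline / z).astype(int)          # pixel shift per sample
            h, w, _ = left_img.shape
            right = np.zeros_like(left_img)
            filled = np.zeros((h, w), dtype=bool)
            cols = np.arange(w)
            for y in range(h):
                x_dst = cols - disparity[y]                            # points shift left in the right eye
                ok = (x_dst >= 0) & (x_dst < w)
                right[y, x_dst[ok]] = left_img[y, cols[ok]]
                filled[y, x_dst[ok]] = True
            # crude disocclusion hole filling: propagate the previous filled pixel along each row
            for y in range(h):
                for x in range(1, w):
                    if not filled[y, x]:
                        right[y, x] = right[y, x - 1]
            return right

        img = np.random.randint(0, 255, (4, 8, 3), dtype=np.uint8)
        zbuf = np.random.rand(4, 8).astype(np.float32)
        print(render_right_view(img, zbuf).shape)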

  14. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three dimensional (3-D) landscapes is an inevitable issue in the deep study of biological ecologies, because at whatever scale in nature, all ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could help complex ecosystems be built easily and mimic the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for biophysics studies in vitro. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as explorations of optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technologies are currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through my talk the audience will be able to sense their significance and predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  15. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time, even though 3D standards are currently under definition. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video or not, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame-sequential 3D format. In the frame-sequential 3D format, the feature points oscillate from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
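
    A sketch of the first (side-by-side) check under assumed thresholds, using OpenCV ORB features as a stand-in for the paper's unspecified detector; the frame-sequential test would instead track features across consecutive frames and look for oscillating positions. The frame path is hypothetical.

        import cv2
        import numpy as np

        def looks_side_by_side(frame, max_dy=2.0, min_matches=30):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            h, w = gray.shape
            left, right = gray[:, : w // 2], gray[:, w // 2 :]
            orb = cv2.ORB_create(1000)
            kL, dL = orb.detectAndCompute(left, None)
            kR, dR = orb.detectAndCompute(right, None)
            if dL is None or dR is None:
                return False
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(dL, dR)
            if len(matches) < min_matches:
                return False
            dx = np.array([kL[m.queryIdx].pt[0] - kR[m.trainIdx].pt[0] for m in matches])
            dy = np.array([kL[m.queryIdx].pt[1] - kR[m.trainIdx].pt[1] for m in matches])
            # stereo halves: rows line up, columns differ by a small, consistent disparity
            return np.median(np.abs(dy)) < max_dy and np.std(dx) < 0.05 * w

        frame = cv2.imread("frame.png")            # hypothetical path to a decoded video frame
        if frame is not None:
            print("side-by-side 3D?", looks_side_by_side(frame))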

  16. 2D/3D image (facial) comparison using camera matching.

    PubMed

    Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

    2006-11-10

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In case of a cooperating suspect, a 3D image may be taken using e.g. a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair cannot be distinguished from the matching pair of faces. Facial expression and resolution of images were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done though on larger sets of facial comparisons. PMID:16337353
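
    A minimal sketch of the pose-matched projection and ratio comparison, with made-up landmark coordinates, head pose and focal length (not the laser-scan data or camera model used in the study): 3D facial landmarks are projected into 2D with a given pose, and scale-free distance ratios are compared between images.

        import numpy as np
        from itertools import combinations
        from scipy.spatial.transform import Rotation

        def project_landmarks(pts3d, rvec, tvec, focal, aspect=1.0):
            cam = pts3d @ Rotation.from_rotvec(rvec).as_matrix().T + tvec
            return np.column_stack([focal * cam[:, 0] / cam[:, 2],
                                    focal * aspect * cam[:, 1] / cam[:, 2]])

        def distance_ratios(pts2d):
            d = np.array([np.linalg.norm(pts2d[i] - pts2d[j])
                          for i, j in combinations(range(len(pts2d)), 2)])
            return d / d.max()                     # scale-free ratios, comparable between images

        # hypothetical seven anatomical landmarks (mm) measured on a 3D scan
        scan = np.array([[0, 0, 0], [-33, 5, -25], [33, 5, -25], [0, -35, -10],
                         [-25, -45, -20], [25, -45, -20], [0, 30, -15]], float)
        pose_r = np.array([0.0, 0.3, 0.0])         # head pose matched to the perpetrator image
        pose_t = np.array([0.0, 0.0, 900.0])
        proj = project_landmarks(scan, pose_r, pose_t, focal=1200.0)
        perp = distance_ratios(proj + np.random.normal(0, 1.5, proj.shape))   # "perpetrator" measurements with noise
        print(np.abs(distance_ratios(proj) - perp).max())                     # small residual suggests a possible match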

  17. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in a three dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  18. Potential of 3D City Models to assess flood vulnerability

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Bochow, Mathias; Schüttig, Martin; Nagel, Claus; Ross, Lutz; Kreibich, Heidi

    2016-04-01

    Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area-specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application schema of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database, a spatially detailed flood vulnerability model is developed by means of data mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared (i) to the generally accepted good engineering practice based on area-specific loss potential and (ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. Comparisons are drawn in terms of…

  19. Creating Realistic 3D Graphics with Excel at High School--Vector Algebra in Practice

    ERIC Educational Resources Information Center

    Benacka, Jan

    2015-01-01

    The article presents the results of an experiment in which Excel applications that depict rotatable and resizable orthographic projections of simple 3D figures with overlapping faces were developed with thirty gymnasium (high school) students aged 17-19 as an introduction to 3D computer graphics. A questionnaire survey was conducted to find out…
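
    The underlying vector algebra is straightforward; the sketch below illustrates the same rotate-then-project idea in Python (the article itself uses Excel, so this is only an illustration of the orthographic projection, not the classroom material).

    ```python
    import numpy as np

    def rotation_matrix(yaw, pitch, roll):
        """Compose rotations about the z, y and x axes (angles in radians)."""
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        return Rz @ Ry @ Rx

    def orthographic_projection(vertices, yaw, pitch, roll, scale=1.0):
        """Rotate 3D vertices (N x 3) and drop the depth axis: (x, y) are kept, z is discarded."""
        rotated = vertices @ rotation_matrix(yaw, pitch, roll).T
        return scale * rotated[:, :2]

    # Unit cube rotated by 30 degrees about two axes and projected onto the drawing plane.
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    print(orthographic_projection(cube, np.radians(30), np.radians(30), 0.0))
    ```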

  20. Extra Dimensions: 3D and Time in PDF Documentation

    SciTech Connect

    Graf, Norman A.; /SLAC

    2011-11-10

    High energy physics is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards that are easy to implement and enjoy broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms, and being reliable and extensible. By providing support for the ECMA-standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Because the format also supports scripting and animation, temporal data can likewise be distributed to a wide audience. In this talk, we present examples of HEP applications which take advantage of this functionality. We demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input. Using this technique, higher-dimensional data, such as LEGO plots or time-dependent information, can be included in PDF files. In principle, a complete event display, with full interactivity, can be incorporated into a PDF file. This would allow the end user not only to customize the view and representation of the data, but to access the underlying data itself.

  1. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. To evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time-consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.
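
    For orientation, a minimal sketch of what registering the SFS output means in practice is given below as a plain nearest-neighbour ICP loop. The paper's actual method replaces the matching step with a Grid Closest Point transform and a genetic search; the code is only an illustration of the basic registration idea under that simplifying assumption.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation and translation mapping src onto dst (Kabsch algorithm)."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, mu_d - R @ mu_s

    def icp(src, dst, iterations=30):
        """Plain ICP: alternately match each source point to its nearest target point
        and re-estimate the rigid transform; returns the registered source points."""
        current = src.copy()
        tree = cKDTree(dst)
        for _ in range(iterations):
            _, idx = tree.query(current)
            R, t = best_rigid_transform(current, dst[idx])
            current = current @ R.T + t
        return current
    ```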

  2. 3D stereophotogrammetric image superimposition onto 3D CT scan images: the future of orthognathic surgery. A pilot study.

    PubMed

    Khambay, Balvinder; Nebel, Jean-Christophe; Bowman, Janet; Walker, Fraser; Hadley, Donald M; Ayoub, Ashraf

    2002-01-01

    The aim of this study was to register and assess the accuracy of the superimposition method of a 3-dimensional (3D) soft tissue stereophotogrammetric image (C3D image) and a 3D image of the underlying skeletal tissue acquired by 3D spiral computerized tomography (CT). The study was conducted on a model head, in which an intact human skull was embedded with an overlying latex mask that reproduced anatomic features of a human face. Ten artificial radiopaque landmarks were secured to the surface of the latex mask. A stereophotogrammetric image of the mask and a 3D spiral CT image of the model head were captured. The C3D image and the CT images were registered for superimposition by 3 different methods: Procrustes superimposition using artificial landmarks, Procrustes analysis using anatomic landmarks, and partial Procrustes analysis using anatomic landmarks followed by registration completion with HICP (a modified Iterative Closest Point algorithm) applied to a specified region of both images. The results showed that Procrustes superimposition using the artificial landmarks produced a superimposition error on the order of 10 mm. Procrustes analysis using anatomic landmarks produced an error on the order of 2 mm. Partial Procrustes analysis using anatomic landmarks followed by HICP produced a superimposition accuracy of between 1.25 and 1.5 mm. It was concluded that a stereophotogrammetric image and a 3D spiral CT scan image can be superimposed with an accuracy of between 1.25 and 1.5 mm using partial Procrustes analysis based on anatomic landmarks followed by registration completion with HICP.
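
    A minimal sketch of landmark-based Procrustes superimposition with the root-mean-square landmark distance used as the error figure (the kind of millimetre values quoted above). This is only the generic Procrustes step; the HICP surface refinement used in the study is not included.

    ```python
    import numpy as np

    def procrustes_superimpose(landmarks_a, landmarks_b):
        """Superimpose landmark set A (N x 3) onto set B with translation, uniform scale
        and rotation, and return the transformed landmarks plus the RMS distance between
        corresponding landmarks (the superimposition error)."""
        A = landmarks_a - landmarks_a.mean(0)
        B = landmarks_b - landmarks_b.mean(0)
        U, S, Vt = np.linalg.svd(A.T @ B)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
            Vt[-1] *= -1
            S[-1] *= -1
            R = Vt.T @ U.T
        scale = S.sum() / (A ** 2).sum()
        aligned = scale * A @ R.T + landmarks_b.mean(0)
        rms = np.sqrt(np.mean(np.sum((aligned - landmarks_b) ** 2, axis=1)))
        return aligned, rms
    ```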

  3. Enhanced LOD Concepts for Virtual 3d City Models

    NASA Astrophysics Data System (ADS)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: it must effectively support the partitioning of a complete model into alternative models of different complexity and provide metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD) and, second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.

  4. The 3D Elevation Program: summary for Kentucky

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. The 3D Elevation Program: summary for Iowa

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  6. The 3D Elevation Program: summary for Pennsylvania

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  7. The 3D Elevation Program: summary for Wyoming

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  8. A Nonlinear Modal Aeroelastic Solver for FUN3D

    NASA Technical Reports Server (NTRS)

    Goldman, Benjamin D.; Bartels, Robert E.; Biedron, Robert T.; Scott, Robert C.

    2016-01-01

    A nonlinear structural solver has been implemented internally within the NASA FUN3D computational fluid dynamics code, allowing for some new aeroelastic capabilities. Using a modal representation of the structure, a set of differential or differential-algebraic equations is derived for general thin structures with geometric nonlinearities. ODEPACK and LAPACK routines are linked with FUN3D, and the nonlinear equations are solved at each CFD time step. The existing predictor-corrector method is retained, whereby the structural solution is updated after mesh deformation. The nonlinear solver is validated using a test case for a flexible aeroshell at transonic, supersonic, and hypersonic flow conditions. Agreement with linear theory is seen for the static aeroelastic solutions at relatively low dynamic pressures, but structural nonlinearities limit deformation amplitudes at high dynamic pressures. No flutter was found at any of the tested trajectory points, though limit cycle oscillations (LCO) may be possible in the transonic regime.
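
    To make the modal formulation concrete, the sketch below integrates a small set of generalized coordinates with a cubic stiffening term standing in for the geometric nonlinearity, advanced over one CFD time step with an LSODA-type integrator. All parameters and the aerodynamic forcing are illustrative assumptions; this is not the FUN3D implementation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative modal parameters (assumed): two modes with natural frequencies (rad/s),
    # damping ratios, and cubic stiffening coefficients representing geometric nonlinearity.
    omega = np.array([20.0, 55.0])
    zeta = np.array([0.01, 0.01])
    k3 = np.array([5.0e3, 1.0e4])

    def generalized_aero_force(t, q):
        """Placeholder for the generalized aerodynamic forces; in the coupled solver these
        come from the CFD surface pressures projected onto the mode shapes."""
        return np.array([0.1 * np.sin(15.0 * t), 0.0])

    def modal_rhs(t, y):
        """State y = [q, qdot]; modal equations  q'' + 2*zeta*omega*q' + omega^2*q + k3*q^3 = f(t)."""
        q, qdot = y[:2], y[2:]
        qddot = generalized_aero_force(t, q) - 2 * zeta * omega * qdot - omega ** 2 * q - k3 * q ** 3
        return np.concatenate([qdot, qddot])

    # Advance the structure over one (assumed) CFD time step, as a predictor-corrector loop would.
    sol = solve_ivp(modal_rhs, (0.0, 0.01), np.zeros(4), method="LSODA", max_step=1e-3)
    print(sol.y[:2, -1])   # modal displacements at the end of the step
    ```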

  9. The 3D Elevation Program: summary for Hawaii

    USGS Publications Warehouse

    Carswell, William J.

    2016-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States, Hawaii, and selected U.S. territories, and quality level 5 interferometric synthetic aperture radar (ifSAR) data for Alaska, all with a 6- to 10-year acquisition cycle, provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other three-dimensional (3D) representations of the Nation’s natural and constructed features.

  10. The 3D Elevation Program: summary for Puerto Rico

    USGS Publications Warehouse

    Carswell, Jr., William J.

    2016-02-03

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States, Hawaii, and selected U.S. territories, and quality level 5 interferometric synthetic aperture radar (ifSAR) data for Alaska, all with a 6- to 10-year acquisition cycle, provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A‒16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other three-dimensional (3D) representations of the Nation’s natural and constructed features.

  11. The 3D Elevation Program: summary for New Hampshire

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Mississippi

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  13. The 3D Elevation Program: summary for West Virginia

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. The 3D Elevation Program: summary for Arkansas

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  15. The 3D Elevation Program: summary for Florida

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the OMB Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  16. The 3D Elevation Program: summary for Illinois

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  17. The 3D Elevation Program: summary for Louisiana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. The 3D Elevation Program: summary for Georgia

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. The 3D Elevation Program: summary for Nevada

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. The 3D Elevation Program: summary for South Carolina

    USGS Publications Warehouse

    Carswell, William

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  1. The 3D Elevation Program: summary for Utah

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  2. The 3D Elevation Program: summary for Tennessee

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  3. The 3D Elevation Program: summary for Kansas

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  4. The 3D Elevation Program: summary for Oklahoma

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. The 3D Elevation Program: summary for Indiana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features.

  6. The 3D Elevation Program: summary for New Mexico

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 (table 1) for the conterminous United States and quality level 5 ifsar data (table 1) for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  7. The 3D Elevation Program: summary for Colorado

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  8. The 3D Elevation Program: summary for Delaware

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  9. The 3D Elevation Program: summary for North Dakota

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  10. The 3D Elevation Program: summary for Maine

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  11. The 3D Elevation Program: summary for Arizona

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  12. The 3D Elevation Program: summary for Ohio

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features.

  13. The 3D Elevation Program: summary for Connecticut

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  14. The 3D Elevation Program: summary for Washington

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  15. The 3D Elevation Program: summary for Montana

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  16. The 3D Elevation Program: summary for South Dakota

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  17. The 3D Elevation Program: summary for New York

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  18. The 3D Elevation Program: summary for Oregon

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. The 3D Elevation Program: summary for Maryland

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. The 3D Elevation Program: Summary for New Jersey

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  1. The 3D Elevation Program: Summary for Massachusetts

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  2. The 3D Elevation Program: summary for Missouri

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  3. The 3D Elevation Program: summary for North Carolina

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment (NEEA; Dewberry, 2011) evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  4. The 3D Elevation Program: summary for Alabama

    USGS Publications Warehouse

    Carswell, William J.

    2013-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 ifsar data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The new 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A-16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  5. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004
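
    A minimal sketch of the segmentation-to-printable-geometry step, using marching cubes from scikit-image to turn a (here synthetic) segmented volume into an ASCII STL file that a printer toolchain can read. The threshold, phantom and file name are illustrative assumptions; the workflow described above relies on dedicated segmentation and photogrammetry software rather than this toy example.

    ```python
    import numpy as np
    from skimage import measure

    def volume_to_ascii_stl(volume, level, path, spacing=(1.0, 1.0, 1.0)):
        """Extract an iso-surface from a 3D image volume with marching cubes and write it
        as an ASCII STL file."""
        verts, faces, normals, _ = measure.marching_cubes(volume, level=level, spacing=spacing)
        with open(path, "w") as f:
            f.write("solid segmented\n")
            for tri in faces:
                v0, v1, v2 = verts[tri]
                n = np.cross(v1 - v0, v2 - v0)
                n = n / (np.linalg.norm(n) + 1e-12)
                f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                for v in (v0, v1, v2):
                    f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                f.write("    endloop\n  endfacet\n")
            f.write("endsolid segmented\n")

    # Illustrative use with a synthetic sphere standing in for a CT segmentation mask.
    z, y, x = np.mgrid[-20:21, -20:21, -20:21]
    phantom = (np.sqrt(x**2 + y**2 + z**2) < 15).astype(float)
    volume_to_ascii_stl(phantom, level=0.5, path="phantom.stl")
    ```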

  6. 3D root canal modeling for advanced endodontic treatment

    NASA Astrophysics Data System (ADS)

    Hong, Shane Y.; Dong, Janet

    2002-06-01

    More than 14 million teeth receive endodontic (root canal) treatment annually. Before a clinician's inspection and diagnosis, destructive access preparation by removing tooth crown and dentin is usually needed. This paper presents a non-invasive method for accessing internal tooth geometry by building a 3-D tooth model from 2-D radiographic and endoscopic images, to be used by an automatic prescription system for computer-aided treatment procedure planning and for root canal preparation by an intelligent micro-drilling machine with on-line monitoring. It covers the techniques specific to the dental application in radiographic image acquisition, image enhancement, image segmentation and feature recognition, distance measurement and calibration, and merging 2D images into a 3D mathematical model representation and display. Also included are methods to form references for irregular tooth geometry and to perform accurate measurements with self-calibration.

  7. Recognition of 3-D Scene with Partially Occluded Objects

    NASA Astrophysics Data System (ADS)

    Lu, Siwei; Wong, Andrew K. C.

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relations even when some objects in the scene are partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHRs of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables the system to distinguish objects that are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way of recognizing, locating, and interpreting partially occluded objects in the range image.
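
    To convey the core notion, the sketch below searches for a mapping of a prototype's attributed nodes into a scene graph that preserves edges and approximately preserves node attributes. It is a deliberate simplification: ordinary graphs instead of hypergraphs, an exhaustive search instead of the paper's heuristic monomorphism algorithm, and made-up surface-area attributes.

    ```python
    from itertools import permutations

    def attributed_monomorphism(prototype, scene, attr_tol=0.1):
        """Brute-force search for a mapping of prototype nodes into scene nodes that
        preserves edges and (approximately) node attributes; scene may contain extra
        nodes and edges, e.g. from other or occluding objects.

        prototype / scene: dict with 'nodes' -> {name: attribute value (e.g. surface area)}
                           and 'edges' -> set of frozenset node pairs (adjacent surfaces)."""
        p_nodes, s_nodes = list(prototype["nodes"]), list(scene["nodes"])
        for chosen in permutations(s_nodes, len(p_nodes)):
            mapping = dict(zip(p_nodes, chosen))
            attrs_ok = all(abs(prototype["nodes"][p] - scene["nodes"][mapping[p]])
                           <= attr_tol * abs(prototype["nodes"][p]) for p in p_nodes)
            if attrs_ok and all(frozenset(mapping[n] for n in e) in scene["edges"]
                                for e in prototype["edges"]):
                return mapping   # prototype located in the (possibly cluttered) scene
        return None

    # Illustrative attributed graphs: nodes carry surface areas, edges mark adjacent surfaces.
    block = {"nodes": {"top": 4.0, "front": 2.0}, "edges": {frozenset(("top", "front"))}}
    scene = {"nodes": {"s1": 4.1, "s2": 1.95, "s3": 7.0},
             "edges": {frozenset(("s1", "s2")), frozenset(("s2", "s3"))}}
    print(attributed_monomorphism(block, scene))
    ```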

  8. The 3D Elevation Program: summary for Puerto Rico

    USGS Publications Warehouse

    Carswell, Jr., William J.

    2016-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States, Hawaii, and selected U.S. territories, and quality level 5 interferometric synthetic aperture radar (ifSAR) data for Alaska, all with a 6- to 10-year acquisition cycle, provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey (USGS), the Office of Management and Budget Circular A‒16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other three-dimensional (3D) representations of the Nation’s natural and constructed features.

  9. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for the development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that, for the segmentation and printing process used, the measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and by measurements made on the 3D rendered vertebra.

  10. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  11. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to meet the need for fast calculation of 3D dosimetry data, with an emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with 3D dosimeters against those calculated with treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using data from the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the processing of raw 3D dosimetry data. Combined with a 3D dosimeter, it is an effective tool for fast verification of TPS-generated plans for tumor irradiation. Consequently, the software may facilitate calculations by the 3D dosimetry community. The calibration characteristics of the PABIGnx obtained through four calibration methods (multi-vial, cross-beam, depth-dose, and brachytherapy) are also discussed.
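
    As an illustration of the first workspace (calibration curves and equations), the sketch below fits a linear dose-response calibration and inverts it for an unknown sample. The dose and response numbers are invented for the example, and the actual software supports more general calibration models.

        # Linear dose-response calibration and its inversion; all numbers are hypothetical.
        import numpy as np

        dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # Gy, delivered to calibration vials
        r2   = np.array([0.30, 0.62, 0.95, 1.27, 1.60])  # 1/s, measured response (made up)

        slope, intercept = np.polyfit(dose, r2, 1)        # R2 = slope * D + intercept
        measured_r2 = 1.10
        print("estimated dose: %.2f Gy" % ((measured_r2 - intercept) / slope))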

  12. [3D reconstructions in radiotherapy planning].

    PubMed

    Schlegel, W

    1991-10-01

    3D reconstructions from tomographic images are used in the planning of radiation therapy to study important anatomical structures such as the body surface, target volumes, and organs at risk. The reconstructed anatomical models are used to define the geometry of the radiation beams. In addition, 3D voxel models are used to calculate 3D dose distributions with an accuracy previously impossible to achieve. Further uses of 3D reconstructions are in the display and evaluation of 3D therapy plans, and in the transfer of treatment planning parameters to the irradiation situation with the help of digitally reconstructed radiographs. 3D tomographic imaging with subsequent 3D reconstruction must be regarded as a completely new basis for the planning of radiation therapy, enabling tumor-tailored radiation therapy of localized target volumes with increased radiation doses and improved sparing of organs at risk. 3D treatment planning is currently being evaluated in clinical trials in connection with the new treatment techniques of conformation radiotherapy. Early experience with 3D treatment planning shows that its clinical importance in radiotherapy is growing, but it will only become a standard radiotherapy tool when volumetric CT scanning, reliable and user-friendly treatment planning software, and faster and cheaper PACS-integrated medical workstations are accessible to radiotherapists.
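
    The idea behind a digitally reconstructed radiograph can be illustrated with a toy parallel-projection example: summing a CT voxel volume along the beam direction yields a radiograph-like image. Real DRRs trace divergent rays through the voxel model; the array below is synthetic.

        # Toy DRR: parallel-ray projection of a (synthetic) CT volume.
        import numpy as np

        ct = np.random.rand(64, 64, 64)   # stand-in for a CT voxel model
        drr = ct.sum(axis=0)              # line integrals along the assumed beam direction
        print(drr.shape)                  # (64, 64) projection image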

  13. Virtual environment interaction through 3D audio by blind children.

    PubMed

    Sánchez, J; Lumbreras, M

    1999-01-01

    Interactive software is actively used for learning, cognition, and entertainment purposes. Educational entertainment software is not very popular among blind children because most computer games and electronic toys have interfaces that are only accessible through visual cues. This work applies the concept of interactive hyperstories to blind children. Hyperstories are implemented in a 3D acoustic virtual world. In past studies we conceptualized a model for designing hyperstories. This study illustrates the feasibility of the model and introduces researchers to the field of entertainment software for blind children. As a result, we have designed and field-tested AudioDoom, a virtual environment that blind children interact with through 3D audio. AudioDoom also makes it possible to test nontrivial interfaces and cognitive tasks with blind children. We explored the construction of cognitive spatial structures in the minds of blind children through audio-based entertainment and navigable spatial-sound experiences. Children playing AudioDoom were exposed to first-person experiences by exploring highly interactive virtual worlds through 3D aural representations of the space. This experience was structured as several cognitive tasks in which the children built concrete models, using Lego™ blocks, of the spatial representations they had constructed through interaction with AudioDoom. We analyze our preliminary results after testing AudioDoom with Chilean children from a school for blind children. We discuss issues such as interactivity in software without visual cues, the representation of navigable spatial-sound experiences, and entertainment software such as computer games for blind children. We also evaluate the feasibility of constructing virtual environments through the design of dynamic learning materials with audio cues.

  14. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on them were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has become widely used and has evolved constantly over time. It is now possible to display CAD data in a 3D PDF file with the free Adobe Reader (version 7 and later). In a 3D PDF, a 3D model is freely rotatable and can be assembled from multiple objects, each of which can be viewed from all directions on its own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. With industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even exported directly. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allows image files (maps) to be used as textures and represents colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help facilitate use.
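
    To give an idea of what the conversion step produces, the sketch below writes a minimal, untextured VRML97 IndexedFaceSet for a single surface from Python. The coordinates are arbitrary example values; this is not the authority's actual converter.

        # Write a minimal VRML97 file containing one untextured surface (example data only).
        points = [(0, 0, 0), (1, 0, 0), (1, 1, 0.2), (0, 1, 0.1)]   # x, y, z vertices
        faces  = [(0, 1, 2, 3)]                                      # one quad, as vertex indices

        with open("surface.wrl", "w") as f:
            f.write("#VRML V2.0 utf8\n")
            f.write("Shape { geometry IndexedFaceSet {\n")
            f.write("  coord Coordinate { point [ %s ] }\n"
                    % ", ".join("%g %g %g" % p for p in points))
            f.write("  coordIndex [ %s ]\n"
                    % " ".join(" ".join(map(str, face)) + " -1" for face in faces))
            f.write("} }\n")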

  15. 3D Camouflage in an Ornithischian Dinosaur.

    PubMed

    Vinther, Jakob; Nicholls, Robert; Lautenschlager, Stephan; Pittman, Michael; Kaye, Thomas G; Rayfield, Emily; Mayr, Gerald; Cuthill, Innes C

    2016-09-26

    Countershading was one of the first proposed mechanisms of camouflage [1, 2]. A dark dorsum and light ventrum counteract the gradient created by illumination from above, obliterating cues to 3D shape [3-6]. Because the optimal countershading varies strongly with light environment [7-9], pigmentation patterns give clues to an animal's habitat. Indeed, comparative evidence from ungulates [9] shows that interspecific variation in countershading matches predictions: in open habitats, where direct overhead sunshine dominates, a sharp dark-light color transition high up the body is evident; in closed habitats (e.g., under forest canopy), diffuse illumination dominates and a smoother dorsoventral gradation is found. We can apply this approach to extinct animals in which the preservation of fossil melanin allows reconstruction of coloration [10-15]. Here we present a study of an exceptionally well-preserved specimen of Psittacosaurus sp. from the Chinese Jehol biota [16, 17]. This Psittacosaurus was countershaded [16] with a light underbelly and tail, whereas the chest was more pigmented. Other patterns resemble disruptive camouflage, whereas the chin and jugal bosses on the face appear dark. We projected the color patterns onto an anatomically accurate life-size model in order to assess their function experimentally. The patterns are compared to the predicted optimal countershading from the measured radiance patterns generated on an identical uniform gray model in direct versus diffuse illumination. These studies suggest that Psittacosaurus sp. inhabited a closed habitat such as a forest with a relatively dense canopy. VIDEO ABSTRACT. PMID:27641767

  16. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
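
    The quoted volume rates follow directly from the acoustics: with a single diverging-wave transmit per volume, the rate is limited only by the round-trip time to the maximum depth, as the back-of-the-envelope sketch below shows (the depth is an assumed example value).

        # Why single-transmit volumetric imaging reaches thousands of volumes per second.
        c = 1540.0       # m/s, speed of sound in soft tissue
        depth = 0.10     # m, assumed maximum imaging depth
        t_round_trip = 2 * depth / c
        print("max volume rate: %.0f volumes/s" % (1 / t_round_trip))   # ~7700, before overhead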

  17. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  18. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  19. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development, and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, is discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used, as well as composites containing metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High-altitude balloons, as well as parabolic flights, will be used to test the effects of microgravity on 3D printing. Zero-pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small-scale prototype can be sent into low Earth orbit as a 3U CubeSat. With the ability to 3D print in space demonstrated, future missions can launch production hardware, greatly improving the sustainability and durability of structures in space.

  20. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  1. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  2. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  3. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  4. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  5. Static & Dynamic Response of 3D Solids

    1996-07-15

    NIKE3D is a large-deformation, 3D finite element code used to obtain the displacements and stresses resulting from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature-dependent constitutive models are available.

  6. 3D Printing. What's the Harm?

    ERIC Educational Resources Information Center

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  7. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  8. A 3D Geostatistical Mapping Tool

    SciTech Connect

    Weiss, W. W.; Stevenson, Graig; Patel, Ketan; Wang, Jun

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest-neighbor methods.
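
    Of the listed algorithms, nearest-neighbor mapping is the simplest; the sketch below assigns each grid location the attribute of its closest sample point. The well locations and porosity values are made up, and kriging or sequential Gaussian simulation would additionally model spatial covariance.

        # Nearest-neighbor interpolation of scattered samples onto 3D grid locations (toy data).
        import numpy as np

        wells = np.array([[100.0, 200.0, -1500.0],
                          [400.0, 250.0, -1520.0],
                          [250.0, 600.0, -1480.0]])   # x, y, z of sample points (made up)
        porosity = np.array([0.12, 0.18, 0.15])       # attribute measured at each sample

        def nearest_neighbor(query, points, values):
            """Assign each query location the value of its closest sample point."""
            d2 = ((points[None, :, :] - query[:, None, :]) ** 2).sum(axis=2)
            return values[d2.argmin(axis=1)]

        grid = np.array([[150.0, 220.0, -1505.0], [300.0, 550.0, -1490.0]])
        print(nearest_neighbor(grid, wells, porosity))   # -> [0.12 0.15]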

  9. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  10. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  11. Clinical applications of 3-D dosimeters

    NASA Astrophysics Data System (ADS)

    Wuu, Cheng-Shie

    2015-01-01

    Both 3-D gels and radiochromic plastic dosimeters, in conjunction with dose image readout systems (MRI or optical CT), have been employed to measure 3-D dose distributions in many clinical applications. The 3-D dose maps obtained from these systems provide a useful tool for clinical dose verification of complex treatment techniques such as IMRT, SRS/SBRT, brachytherapy, and proton beam therapy. These complex treatments produce high-dose-gradient regions at the boundaries between the target and surrounding critical organs. Dose accuracy in these areas can be critical and may affect treatment outcome. In this review, applications of 3-D gels and the PRESAGE dosimeter are evaluated in terms of their performance in clinical dose verification and in the commissioning of various treatment modalities. Future research interests and clinical needs in 3-D dosimetry are also discussed.
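
    At its simplest, measured-versus-planned verification can be expressed as a voxel-wise dose-difference pass rate, as in the sketch below on synthetic arrays; clinical practice typically uses the gamma index, which also folds in a distance-to-agreement criterion.

        # Voxel-wise dose-difference pass rate on synthetic dose grids (illustration only).
        import numpy as np

        planned  = np.random.rand(32, 32, 32) * 2.0                      # Gy, TPS dose (synthetic)
        measured = planned + np.random.normal(0.0, 0.02, planned.shape)  # Gy, dosimeter readout (synthetic)

        tol = 0.03 * planned.max()                   # 3% of the maximum planned dose
        pass_rate = np.mean(np.abs(measured - planned) <= tol)
        print("dose-difference pass rate: %.1f%%" % (100 * pass_rate))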

  12. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors, including radiation hardness, fast time response, active edges, and dual readout capability. The fabrication of 3D sensors is, however, rather complex. In recent years, there has been worldwide activity on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single-sided, double-column-type) 3D detectors in two prototype runs, and a third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double-sided 3D detectors, is also briefly reported.

  13. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in tokamaks and modern high-performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding-center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge exchange, viscous velocity reduction, and pitch-angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, the neutral beam injection model, frictional drag, and pitch-angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.
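
    The neutral-deposition part of such a model can be caricatured as follows: a neutral marker is followed along a straight line and ionized with a probability set by an ionization rate. The rate here is constant and made up; BEAMS3D takes its rates from the ADAS database and then follows the resulting ion's guiding center in the 3D field.

        # Toy neutral-beam deposition: straight-line neutral with a constant, assumed ionization rate.
        import numpy as np

        rng = np.random.default_rng(0)
        pos = np.array([2.0, 0.0, 0.0])     # m, neutral starting position (example value)
        v   = np.array([-1.0e6, 0.0, 0.0])  # m/s, beam velocity (example value)
        nu_ion = 2.0e6                       # 1/s, assumed constant ionization rate
        dt = 1.0e-8                          # s, time step

        for _ in range(1000):
            pos = pos + v * dt
            if rng.random() < 1.0 - np.exp(-nu_ion * dt):   # ionization during this step
                print("ionized at R = %.2f m" % np.hypot(pos[0], pos[1]))
                break
        else:
            print("shine-through: neutral crossed the domain un-ionized")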

  14. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  15. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little